00:00:00.000 Started by upstream project "autotest-per-patch" build number 130871
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.109 The recommended git tool is: git
00:00:00.110 using credential 00000000-0000-0000-0000-000000000002
00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.174 Fetching changes from the remote Git repository
00:00:00.176 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.229 Using shallow fetch with depth 1
00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.229 > git --version # timeout=10
00:00:00.268 > git --version # 'git version 2.39.2'
00:00:00.268 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.292 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.292 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.116 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.128 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.141 Checking out Revision f95f9907808933a1db7196e15e13478e0f322ee7 (FETCH_HEAD)
00:00:08.141 > git config core.sparsecheckout # timeout=10
00:00:08.151 > git read-tree -mu HEAD # timeout=10
00:00:08.168 > git checkout -f f95f9907808933a1db7196e15e13478e0f322ee7 # timeout=5
00:00:08.188 Commit message: "Revert "autotest-phy: replace deprecated label for nvmf-cvl""
00:00:08.189 > git rev-list --no-walk 67cd2f1639a8077ee9fc0f9259e068d0e5b67761 # timeout=10
00:00:08.442 [Pipeline] Start of Pipeline
00:00:08.458 [Pipeline] library
00:00:08.460 Loading library shm_lib@master
00:00:08.460 Library shm_lib@master is cached. Copying from home.
00:00:08.475 [Pipeline] node
00:00:08.484 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:08.485 [Pipeline] {
00:00:08.491 [Pipeline] catchError
00:00:08.492 [Pipeline] {
00:00:08.501 [Pipeline] wrap
00:00:08.508 [Pipeline] {
00:00:08.514 [Pipeline] stage
00:00:08.515 [Pipeline] { (Prologue)
00:00:08.526 [Pipeline] echo
00:00:08.527 Node: VM-host-WFP1
00:00:08.531 [Pipeline] cleanWs
00:00:08.540 [WS-CLEANUP] Deleting project workspace...
00:00:08.540 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.546 [WS-CLEANUP] done
00:00:08.813 [Pipeline] setCustomBuildProperty
00:00:08.931 [Pipeline] httpRequest
00:00:09.537 [Pipeline] echo
00:00:09.538 Sorcerer 10.211.164.101 is alive
00:00:09.546 [Pipeline] retry
00:00:09.547 [Pipeline] {
00:00:09.558 [Pipeline] httpRequest
00:00:09.564 HttpMethod: GET
00:00:09.564 URL: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:09.566 Sending request to url: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:09.582 Response Code: HTTP/1.1 200 OK
00:00:09.583 Success: Status code 200 is in the accepted range: 200,404
00:00:09.583 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:36.409 [Pipeline] }
00:00:36.426 [Pipeline] // retry
00:00:36.434 [Pipeline] sh
00:00:36.721 + tar --no-same-owner -xf jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:36.809 [Pipeline] httpRequest
00:00:37.179 [Pipeline] echo
00:00:37.180 Sorcerer 10.211.164.101 is alive
00:00:37.190 [Pipeline] retry
00:00:37.192 [Pipeline] {
00:00:37.206 [Pipeline] httpRequest
00:00:37.210 HttpMethod: GET
00:00:37.211 URL: http://10.211.164.101/packages/spdk_d16db39ee342e0479057d263a9944f38a2a1af94.tar.gz
00:00:37.211 Sending request to url: http://10.211.164.101/packages/spdk_d16db39ee342e0479057d263a9944f38a2a1af94.tar.gz
00:00:37.240 Response Code: HTTP/1.1 200 OK
00:00:37.241 Success: Status code 200 is in the accepted range: 200,404
00:00:37.241 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_d16db39ee342e0479057d263a9944f38a2a1af94.tar.gz
00:04:26.117 [Pipeline] }
00:04:26.135 [Pipeline] // retry
00:04:26.142 [Pipeline] sh
00:04:26.467 + tar --no-same-owner -xf spdk_d16db39ee342e0479057d263a9944f38a2a1af94.tar.gz
00:04:29.010 [Pipeline] sh
00:04:29.328 + git -C spdk log --oneline -n5
00:04:29.328 d16db39ee bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create()
00:04:29.328 32fb30b70 bdev/nvme: changed default config to multipath
00:04:29.328 397c5fc31 bdev/nvme: ctrl config consistency check
00:04:29.328 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:04:29.328 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut
00:04:29.342 [Pipeline] writeFile
00:04:29.352 [Pipeline] sh
00:04:29.631 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:29.641 [Pipeline] sh
00:04:29.925 + cat autorun-spdk.conf
00:04:29.925 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:29.925 SPDK_TEST_NVME=1
00:04:29.925 SPDK_TEST_FTL=1
00:04:29.925 SPDK_TEST_ISAL=1
00:04:29.925 SPDK_RUN_ASAN=1
00:04:29.925 SPDK_RUN_UBSAN=1
00:04:29.925 SPDK_TEST_XNVME=1
00:04:29.925 SPDK_TEST_NVME_FDP=1
00:04:29.925 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:29.926 RUN_NIGHTLY=0
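The autorun-spdk.conf dumped above is the whole contract between this Jenkins job and the scripts that follow: it is plain KEY=VALUE shell, and every later stage simply sources it and branches on the flags. A minimal sketch of that consumption pattern, assuming only what the log itself shows (the path is this job's workspace; the FTL check mirrors the `(( SPDK_TEST_FTL == 1 ))` test that appears below in prepare_nvme.sh):

```bash
#!/usr/bin/env bash
# Sketch: consume autorun-spdk.conf the way the stage scripts below do.
conf=/var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf

[[ -e $conf ]] || { echo "missing $conf" >&2; exit 1; }
source "$conf"    # plain KEY=VALUE shell, so sourcing is all it takes

# Default unset flags to 0 so the arithmetic test stays safe.
if (( ${SPDK_TEST_FTL:-0} == 1 )); then
    echo "FTL testing requested: an extra 6G nvme-ftl.img backend is needed"
fi
```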
00:04:29.932 [Pipeline] }
00:04:29.945 [Pipeline] // stage
00:04:29.960 [Pipeline] stage
00:04:29.962 [Pipeline] { (Run VM)
00:04:29.973 [Pipeline] sh
00:04:30.254 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:30.254 + echo 'Start stage prepare_nvme.sh'
00:04:30.254 Start stage prepare_nvme.sh
00:04:30.254 + [[ -n 4 ]]
00:04:30.254 + disk_prefix=ex4
00:04:30.254 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:04:30.254 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:04:30.254 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:04:30.254 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:30.254 ++ SPDK_TEST_NVME=1
00:04:30.254 ++ SPDK_TEST_FTL=1
00:04:30.254 ++ SPDK_TEST_ISAL=1
00:04:30.254 ++ SPDK_RUN_ASAN=1
00:04:30.254 ++ SPDK_RUN_UBSAN=1
00:04:30.254 ++ SPDK_TEST_XNVME=1
00:04:30.254 ++ SPDK_TEST_NVME_FDP=1
00:04:30.254 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:30.254 ++ RUN_NIGHTLY=0
00:04:30.254 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:04:30.254 + nvme_files=()
00:04:30.254 + declare -A nvme_files
00:04:30.254 + backend_dir=/var/lib/libvirt/images/backends
00:04:30.254 + nvme_files['nvme.img']=5G
00:04:30.254 + nvme_files['nvme-cmb.img']=5G
00:04:30.254 + nvme_files['nvme-multi0.img']=4G
00:04:30.254 + nvme_files['nvme-multi1.img']=4G
00:04:30.254 + nvme_files['nvme-multi2.img']=4G
00:04:30.254 + nvme_files['nvme-openstack.img']=8G
00:04:30.254 + nvme_files['nvme-zns.img']=5G
00:04:30.254 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:30.254 + (( SPDK_TEST_FTL == 1 ))
00:04:30.254 + nvme_files["nvme-ftl.img"]=6G
00:04:30.254 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:30.254 + nvme_files["nvme-fdp.img"]=1G
00:04:30.254 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:30.254 + for nvme in "${!nvme_files[@]}"
00:04:30.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:04:30.254 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:30.254 + for nvme in "${!nvme_files[@]}"
00:04:30.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:04:30.254 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:04:30.254 + for nvme in "${!nvme_files[@]}"
00:04:30.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:04:30.254 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:30.254 + for nvme in "${!nvme_files[@]}"
00:04:30.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:04:30.254 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:30.254 + for nvme in "${!nvme_files[@]}"
00:04:30.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:04:30.254 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:30.254 + for nvme in "${!nvme_files[@]}"
00:04:30.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:04:30.512 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:30.512 + for nvme in "${!nvme_files[@]}"
00:04:30.512 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:04:30.512 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:30.512 + for nvme in "${!nvme_files[@]}"
00:04:30.512 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:04:30.512 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:04:30.512 + for nvme in "${!nvme_files[@]}"
00:04:30.512 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:04:30.512 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:30.512 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:04:30.512 + echo 'End stage prepare_nvme.sh'
00:04:30.512 End stage prepare_nvme.sh
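Each `Formatting ...` line above is qemu-img output: the backing files come out as raw, falloc-preallocated images sized per the nvme_files table that the test flags assembled. A standalone sketch of one such step, assuming qemu-img is what sits behind create_nvme_img.sh (the log shows its output format, not the script's exact invocation):

```bash
#!/usr/bin/env bash
# Sketch: recreate one backing file as the "Formatting ..." lines imply.
# fmt=raw and preallocation=falloc match the log output verbatim.
backend_dir=/var/lib/libvirt/images/backends
sudo mkdir -p "$backend_dir"
sudo qemu-img create -f raw -o preallocation=falloc \
    "$backend_dir/ex4-nvme.img" 5G
```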
00:04:30.521 [Pipeline] sh
00:04:30.798 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:30.798 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:04:30.798
00:04:30.798 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:04:30.798 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:04:30.798 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:04:30.798 HELP=0
00:04:30.798 DRY_RUN=0
00:04:30.798 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:04:30.798 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:04:30.798 NVME_AUTO_CREATE=0
00:04:30.798 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:04:30.798 NVME_CMB=,,,,
00:04:30.798 NVME_PMR=,,,,
00:04:30.798 NVME_ZNS=,,,,
00:04:30.798 NVME_MS=true,,,,
00:04:30.798 NVME_FDP=,,,on,
00:04:30.798 SPDK_VAGRANT_DISTRO=fedora39
00:04:30.798 SPDK_VAGRANT_VMCPU=10
00:04:30.798 SPDK_VAGRANT_VMRAM=12288
00:04:30.798 SPDK_VAGRANT_PROVIDER=libvirt
00:04:30.798 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:04:30.798 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:30.798 SPDK_OPENSTACK_NETWORK=0
00:04:30.798 VAGRANT_PACKAGE_BOX=0
00:04:30.798 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:04:30.798 FORCE_DISTRO=true
00:04:30.798 VAGRANT_BOX_VERSION=
00:04:30.798 EXTRA_VAGRANTFILES=
00:04:30.798 NIC_MODEL=e1000
00:04:30.798
00:04:30.798 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:04:30.798 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:04:34.084 Bringing machine 'default' up with 'libvirt' provider...
00:04:35.019 ==> default: Creating image (snapshot of base box volume).
00:04:35.278 ==> default: Creating domain with the following settings...
00:04:35.278 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728299715_26d6d76ea016db56431c
00:04:35.278 ==> default: -- Domain type: kvm
00:04:35.278 ==> default: -- Cpus: 10
00:04:35.278 ==> default: -- Feature: acpi
00:04:35.278 ==> default: -- Feature: apic
00:04:35.278 ==> default: -- Feature: pae
00:04:35.278 ==> default: -- Memory: 12288M
00:04:35.278 ==> default: -- Memory Backing: hugepages:
00:04:35.278 ==> default: -- Management MAC:
00:04:35.278 ==> default: -- Loader:
00:04:35.278 ==> default: -- Nvram:
00:04:35.278 ==> default: -- Base box: spdk/fedora39
00:04:35.278 ==> default: -- Storage pool: default
00:04:35.278 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728299715_26d6d76ea016db56431c.img (20G)
00:04:35.278 ==> default: -- Volume Cache: default
00:04:35.278 ==> default: -- Kernel:
00:04:35.278 ==> default: -- Initrd:
00:04:35.278 ==> default: -- Graphics Type: vnc
00:04:35.278 ==> default: -- Graphics Port: -1
00:04:35.278 ==> default: -- Graphics IP: 127.0.0.1
00:04:35.278 ==> default: -- Graphics Password: Not defined
00:04:35.278 ==> default: -- Video Type: cirrus
00:04:35.278 ==> default: -- Video VRAM: 9216
00:04:35.278 ==> default: -- Sound Type:
00:04:35.278 ==> default: -- Keymap: en-us
00:04:35.278 ==> default: -- TPM Path:
00:04:35.278 ==> default: -- INPUT: type=mouse, bus=ps2
00:04:35.278 ==> default: -- Command line args:
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:04:35.278 ==> default: -> value=-drive,
00:04:35.278 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:04:35.278 ==> default: -> value=-drive,
00:04:35.278 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:04:35.278 ==> default: -> value=-drive,
00:04:35.278 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:35.278 ==> default: -> value=-drive,
00:04:35.278 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:35.278 ==> default: -> value=-drive,
00:04:35.278 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:04:35.278 ==> default: -> value=-drive,
00:04:35.278 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:04:35.278 ==> default: -> value=-device,
00:04:35.278 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
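The fourth controller is the one SPDK_TEST_NVME_FDP=1 cares about: unlike nvme-0 through nvme-2 it attaches to an `nvme-subsys` device with Flexible Data Placement enabled (in QEMU's nvme-subsys, fdp.runs, fdp.nrg and fdp.nruh set the reclaim-unit nominal size, the number of reclaim groups, and the number of reclaim-unit handles). Note also the `ms=64` on nvme-0's namespace, which gives the FTL tests a metadata-capable namespace. Condensed out of the domain definition into a plain QEMU command line for reference (the nvme/nvme-subsys/nvme-ns arguments are taken verbatim from the log; the emulator path is this run's, and the -machine/-m harness around them is illustrative):

```bash
#!/usr/bin/env bash
# Sketch: just the FDP-enabled NVMe subsystem from the args above,
# runnable on its own against the 1G ex4-nvme-fdp.img backing file.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -machine q35,accel=kvm -m 1G -nographic \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
```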
00:04:35.537 ==> default: Creating shared folders metadata...
00:04:35.537 ==> default: Starting domain.
00:04:37.437 ==> default: Waiting for domain to get an IP address...
00:04:55.507 ==> default: Waiting for SSH to become available...
00:04:55.507 ==> default: Configuring and enabling network interfaces...
00:05:00.776 default: SSH address: 192.168.121.188:22
00:05:00.776 default: SSH username: vagrant
00:05:00.776 default: SSH auth method: private key
00:05:04.068 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:14.139 ==> default: Mounting SSHFS shared folder...
00:05:15.076 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:15.076 ==> default: Checking Mount..
00:05:16.981 ==> default: Folder Successfully Mounted!
00:05:16.981 ==> default: Running provisioner: file...
00:05:17.917 default: ~/.gitconfig => .gitconfig
00:05:18.483
00:05:18.483 SUCCESS!
00:05:18.483
00:05:18.483 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:05:18.483 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:18.483 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:05:18.483
00:05:18.492 [Pipeline] }
00:05:18.507 [Pipeline] // stage
00:05:18.515 [Pipeline] dir
00:05:18.516 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:05:18.517 [Pipeline] {
00:05:18.530 [Pipeline] catchError
00:05:18.531 [Pipeline] {
00:05:18.543 [Pipeline] sh
00:05:18.823 + vagrant ssh-config --host vagrant
00:05:18.823 + sed -ne+ /^Host/,$p
00:05:18.823 tee ssh_conf
00:05:22.149 Host vagrant
00:05:22.149 HostName 192.168.121.188
00:05:22.149 User vagrant
00:05:22.149 Port 22
00:05:22.149 UserKnownHostsFile /dev/null
00:05:22.149 StrictHostKeyChecking no
00:05:22.149 PasswordAuthentication no
00:05:22.149 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:05:22.149 IdentitiesOnly yes
00:05:22.149 LogLevel FATAL
00:05:22.149 ForwardAgent yes
00:05:22.149 ForwardX11 yes
00:05:22.149
00:05:22.162 [Pipeline] withEnv
00:05:22.165 [Pipeline] {
00:05:22.177 [Pipeline] sh
00:05:22.457 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:05:22.457 source /etc/os-release
00:05:22.457 [[ -e /image.version ]] && img=$(< /image.version)
00:05:22.457 # Minimal, systemd-like check.
00:05:22.457 if [[ -e /.dockerenv ]]; then
00:05:22.457 # Clear garbage from the node's name:
00:05:22.457 # agt-er_autotest_547-896 -> autotest_547-896
00:05:22.457 # $HOSTNAME is the actual container id
00:05:22.457 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:05:22.457 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:05:22.457 # We can assume this is a mount from a host where container is running,
00:05:22.457 # so fetch its hostname to easily identify the target swarm worker.
00:05:22.457 container="$(< /etc/hostname) ($agent)"
00:05:22.457 else
00:05:22.457 # Fallback
00:05:22.457 container=$agent
00:05:22.457 fi
00:05:22.457 fi
00:05:22.457 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:05:22.457
00:05:22.725 [Pipeline] }
00:05:22.739 [Pipeline] // withEnv
00:05:22.745 [Pipeline] setCustomBuildProperty
00:05:22.757 [Pipeline] stage
00:05:22.759 [Pipeline] { (Tests)
00:05:22.772 [Pipeline] sh
00:05:23.051 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:05:23.324 [Pipeline] sh
00:05:23.679 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:05:23.950 [Pipeline] timeout
00:05:23.950 Timeout set to expire in 50 min
00:05:23.952 [Pipeline] {
00:05:23.966 [Pipeline] sh
00:05:24.244 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:05:24.812 HEAD is now at d16db39ee bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create()
00:05:24.824 [Pipeline] sh
00:05:25.105 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:05:25.376 [Pipeline] sh
00:05:25.695 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:05:25.976 [Pipeline] sh
00:05:26.257 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:05:26.517 ++ readlink -f spdk_repo
00:05:26.517 + DIR_ROOT=/home/vagrant/spdk_repo
00:05:26.517 + [[ -n /home/vagrant/spdk_repo ]]
00:05:26.517 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:05:26.517 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:05:26.517 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:05:26.517 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:05:26.517 + [[ -d /home/vagrant/spdk_repo/output ]]
00:05:26.517 + [[ nvme-vg-autotest == pkgdep-* ]]
00:05:26.517 + cd /home/vagrant/spdk_repo
00:05:26.517 + source /etc/os-release
00:05:26.517 ++ NAME='Fedora Linux'
00:05:26.517 ++ VERSION='39 (Cloud Edition)'
00:05:26.517 ++ ID=fedora
00:05:26.517 ++ VERSION_ID=39
00:05:26.517 ++ VERSION_CODENAME=
00:05:26.517 ++ PLATFORM_ID=platform:f39
00:05:26.517 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:26.517 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:26.517 ++ LOGO=fedora-logo-icon
00:05:26.517 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:26.517 ++ HOME_URL=https://fedoraproject.org/
00:05:26.517 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:26.517 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:26.517 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:26.517 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:26.517 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:26.517 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:26.517 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:26.517 ++ SUPPORT_END=2024-11-12
00:05:26.517 ++ VARIANT='Cloud Edition'
00:05:26.517 ++ VARIANT_ID=cloud
00:05:26.517 + uname -a
00:05:26.517 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:26.517 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:27.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:27.342 Hugepages
00:05:27.342 node hugesize free / total
00:05:27.342 node0 1048576kB 0 / 0
00:05:27.342 node0 2048kB 0 / 0
00:05:27.342
00:05:27.342 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:27.342 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:27.342 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:27.342 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:27.342 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:05:27.600 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:27.600 + rm -f /tmp/spdk-ld-path
00:05:27.601 + source autorun-spdk.conf
00:05:27.601 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:27.601 ++ SPDK_TEST_NVME=1
00:05:27.601 ++ SPDK_TEST_FTL=1
00:05:27.601 ++ SPDK_TEST_ISAL=1
00:05:27.601 ++ SPDK_RUN_ASAN=1
00:05:27.601 ++ SPDK_RUN_UBSAN=1
00:05:27.601 ++ SPDK_TEST_XNVME=1
00:05:27.601 ++ SPDK_TEST_NVME_FDP=1
00:05:27.601 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:27.601 ++ RUN_NIGHTLY=0
00:05:27.601 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:27.601 + [[ -n '' ]]
00:05:27.601 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:05:27.601 + for M in /var/spdk/build-*-manifest.txt
00:05:27.601 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:27.601 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:05:27.601 + for M in /var/spdk/build-*-manifest.txt
00:05:27.601 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:27.601 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:05:27.601 + for M in /var/spdk/build-*-manifest.txt
00:05:27.601 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:27.601 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:05:27.601 ++ uname
00:05:27.601 + [[ Linux == \L\i\n\u\x ]]
00:05:27.601 + sudo dmesg -T
00:05:27.601 + sudo dmesg --clear
00:05:27.601 + dmesg_pid=5243
+ sudo dmesg -Tw
00:05:27.601 + [[ Fedora Linux == FreeBSD ]]
00:05:27.601 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:27.601 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:27.601 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:27.601 + [[ -x /usr/src/fio-static/fio ]]
00:05:27.601 + export FIO_BIN=/usr/src/fio-static/fio
00:05:27.601 + FIO_BIN=/usr/src/fio-static/fio
00:05:27.601 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:27.601 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:27.601 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:27.601 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:27.601 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:27.601 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:27.601 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:27.601 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:27.601 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:27.601 Test configuration:
00:05:27.601 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:27.601 SPDK_TEST_NVME=1
00:05:27.601 SPDK_TEST_FTL=1
00:05:27.601 SPDK_TEST_ISAL=1
00:05:27.601 SPDK_RUN_ASAN=1
00:05:27.601 SPDK_RUN_UBSAN=1
00:05:27.601 SPDK_TEST_XNVME=1
00:05:27.601 SPDK_TEST_NVME_FDP=1
00:05:27.601 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:27.859 RUN_NIGHTLY=0
11:16:09 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:05:27.859 11:16:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:27.859 11:16:09 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:27.859 11:16:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:27.859 11:16:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:27.859 11:16:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:27.859 11:16:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:27.859 11:16:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:27.859 11:16:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:27.859 11:16:09 -- paths/export.sh@5 -- $ export PATH
00:05:27.859 11:16:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:27.859 11:16:09 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:05:27.859 11:16:09 -- common/autobuild_common.sh@486 -- $ date +%s
00:05:27.859 11:16:09 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728299769.XXXXXX
00:05:27.859 11:16:09 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728299769.0010y4
00:05:27.859 11:16:09 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:05:27.859 11:16:09 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:05:27.859 11:16:09 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:05:27.859 11:16:09 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:05:27.859 11:16:09 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:05:27.860 11:16:09 -- common/autobuild_common.sh@502 -- $ get_config_params
00:05:27.860 11:16:09 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:05:27.860 11:16:09 -- common/autotest_common.sh@10 -- $ set +x
00:05:27.860 11:16:09 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:05:27.860 11:16:09 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:05:27.860 11:16:09 -- pm/common@17 -- $ local monitor
00:05:27.860 11:16:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:27.860 11:16:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:27.860 11:16:09 -- pm/common@21 -- $ date +%s
00:05:27.860 11:16:09 -- pm/common@25 -- $ sleep 1
00:05:27.860 11:16:09 -- pm/common@21 -- $ date +%s
00:05:27.860 11:16:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728299769
00:05:27.860 11:16:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728299769
00:05:27.860 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728299769_collect-vmstat.pm.log
00:05:27.860 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728299769_collect-cpu-load.pm.log
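The two pm helpers above (collect-cpu-load, collect-vmstat) are forked into the background with their output redirected into output/power, and the `trap stop_monitor_resources EXIT` that follows tears them down when autobuild exits. A minimal sketch of that detach-log-and-reap pattern, with illustrative names and interval; this shows the shape of the mechanism, not SPDK's actual implementation:

```bash
#!/usr/bin/env bash
# Sketch: background resource sampler in the spirit of collect-vmstat.
# monitor_loop, LOG_DIR and the 1-second interval are illustrative.
LOG_DIR=/home/vagrant/spdk_repo/output/power
mkdir -p "$LOG_DIR"

monitor_loop() {
    local log=$LOG_DIR/$1.pm.log
    echo "Redirecting to $log" >&2
    exec >>"$log" 2>&1            # from here on, samples land in the log
    while true; do
        vmstat 1 1 | tail -n1     # append one sample line
        sleep 1
    done
}

monitor_loop "monitor.autobuild.sh.$(date +%s)" &
monitor_pid=$!
trap 'kill "$monitor_pid" 2>/dev/null' EXIT   # analogous to stop_monitor_resources
```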
00:05:28.797 11:16:10 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:05:28.797 11:16:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:28.797 11:16:10 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:28.797 11:16:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:05:28.797 11:16:10 -- spdk/autobuild.sh@16 -- $ date -u
00:05:28.797 Mon Oct 7 11:16:10 AM UTC 2024
00:05:28.797 11:16:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:28.797 v25.01-pre-38-gd16db39ee
00:05:28.797 11:16:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:05:28.797 11:16:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:05:28.797 11:16:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:05:28.797 11:16:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:05:28.797 11:16:10 -- common/autotest_common.sh@10 -- $ set +x
00:05:28.797 ************************************
00:05:28.797 START TEST asan
00:05:28.797 ************************************
00:05:29.057 using asan
11:16:10 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:05:29.057
00:05:29.057 real 0m0.001s
00:05:29.057 user 0m0.000s
00:05:29.057 sys 0m0.000s
00:05:29.057 11:16:10 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:05:29.057 11:16:10 asan -- common/autotest_common.sh@10 -- $ set +x
00:05:29.057 ************************************
00:05:29.057 END TEST asan
00:05:29.057 ************************************
00:05:29.057 11:16:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:29.057 11:16:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:29.057 11:16:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:05:29.057 11:16:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:05:29.057 11:16:10 -- common/autotest_common.sh@10 -- $ set +x
00:05:29.057 ************************************
00:05:29.057 START TEST ubsan
00:05:29.057 ************************************
00:05:29.057 using ubsan
11:16:10 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:05:29.057
00:05:29.057 real 0m0.001s
00:05:29.057 user 0m0.000s
00:05:29.057 sys 0m0.000s
00:05:29.057 11:16:10 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:05:29.057 11:16:10 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:29.057 ************************************
00:05:29.057 END TEST ubsan
00:05:29.057 ************************************
00:05:29.057 11:16:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:29.057 11:16:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:29.057 11:16:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:29.057 11:16:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:29.057 11:16:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:29.057 11:16:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:29.057 11:16:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:29.057 11:16:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:29.057 11:16:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:05:29.316 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:29.316 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:29.885 Using 'verbs' RDMA provider
00:05:49.380 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:06:04.262 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:06:04.262 Creating mk/config.mk...done.
00:06:04.262 Creating mk/cc.flags.mk...done.
00:06:04.262 Type 'make' to build.
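Everything from here on runs inside the same `run_test` wrapper that produced the START TEST/END TEST banners and real/user/sys timings above, and that is about to wrap `make -j10`. A minimal sketch of what such a wrapper does, assuming only the behaviour visible in this log rather than SPDK's actual code:

```bash
#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: banner, time the command, banner.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # emits the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test asan echo 'using asan'    # mirrors the invocation in the log
```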
00:06:04.262 11:16:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:06:04.262 11:16:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:06:04.262 11:16:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:06:04.262 11:16:44 -- common/autotest_common.sh@10 -- $ set +x
00:06:04.262 ************************************
00:06:04.262 START TEST make
00:06:04.262 ************************************
00:06:04.262 11:16:44 make -- common/autotest_common.sh@1125 -- $ make -j10
00:06:04.262 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:06:04.262 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:06:04.262 meson setup builddir \
00:06:04.262 -Dwith-libaio=enabled \
00:06:04.262 -Dwith-liburing=enabled \
00:06:04.262 -Dwith-libvfn=disabled \
00:06:04.262 -Dwith-spdk=false && \
00:06:04.262 meson compile -C builddir && \
00:06:04.262 cd -)
00:06:04.262 make[1]: Nothing to be done for 'all'.
00:06:06.799 The Meson build system
00:06:06.799 Version: 1.5.0
00:06:06.799 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:06:06.799 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:06.799 Build type: native build
00:06:06.799 Project name: xnvme
00:06:06.799 Project version: 0.7.3
00:06:06.799 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:06.799 C linker for the host machine: cc ld.bfd 2.40-14
00:06:06.799 Host machine cpu family: x86_64
00:06:06.799 Host machine cpu: x86_64
00:06:06.799 Message: host_machine.system: linux
00:06:06.799 Compiler for C supports arguments -Wno-missing-braces: YES
00:06:06.799 Compiler for C supports arguments -Wno-cast-function-type: YES
00:06:06.799 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:06:06.799 Run-time dependency threads found: YES
00:06:06.799 Has header "setupapi.h" : NO
00:06:06.799 Has header "linux/blkzoned.h" : YES
00:06:06.799 Has header "linux/blkzoned.h" : YES (cached)
00:06:06.799 Has header "libaio.h" : YES
00:06:06.799 Library aio found: YES
00:06:06.799 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:06.799 Run-time dependency liburing found: YES 2.2
00:06:06.799 Dependency libvfn skipped: feature with-libvfn disabled
00:06:06.799 Run-time dependency appleframeworks found: NO (tried framework)
00:06:06.799 Run-time dependency appleframeworks found: NO (tried framework)
00:06:06.799 Configuring xnvme_config.h using configuration
00:06:06.799 Configuring xnvme.spec using configuration
00:06:06.799 Run-time dependency bash-completion found: YES 2.11
00:06:06.799 Message: Bash-completions: /usr/share/bash-completion/completions
00:06:06.799 Program cp found: YES (/usr/bin/cp)
00:06:06.799 Has header "winsock2.h" : NO
00:06:06.799 Has header "dbghelp.h" : NO
00:06:06.799 Library rpcrt4 found: NO
00:06:06.799 Library rt found: YES
00:06:06.799 Checking for function "clock_gettime" with dependency -lrt: YES
00:06:06.799 Found CMake: /usr/bin/cmake (3.27.7)
00:06:06.799 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:06:06.799 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:06:06.800 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:06:06.800 Build targets in project: 32
00:06:06.800
00:06:06.800 xnvme 0.7.3
00:06:06.800
00:06:06.800 User defined options
00:06:06.800 with-libaio : enabled
00:06:06.800 with-liburing: enabled
00:06:06.800 with-libvfn : disabled
00:06:06.800 with-spdk : false
00:06:06.800
00:06:06.800 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:07.062 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:06:07.062 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:06:07.062 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:06:07.062 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:06:07.062 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:06:07.062 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:06:07.062 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:06:07.062 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:06:07.062 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:06:07.062 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:06:07.062 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:06:07.062 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:06:07.320 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:06:07.320 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:06:07.321 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:06:07.321 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:06:07.321 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:06:07.321 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:06:07.321 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:06:07.321 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:06:07.321 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:06:07.321 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:06:07.321 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:06:07.321 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:06:07.321 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:06:07.321 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:06:07.321 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:06:07.321 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:06:07.321 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:06:07.579 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:06:07.579 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:06:07.579 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:06:07.579 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:06:07.579 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:06:07.579 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:06:07.579 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:06:07.579 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:06:07.579 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:06:07.579 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:06:07.579 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:06:07.579 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:06:07.579 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:06:07.579 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:06:07.579 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:06:07.579 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:06:07.579 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:06:07.579 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:06:07.579 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:06:07.579 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:06:07.579 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:06:07.579 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:06:07.579 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:06:07.579 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:06:07.579 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:06:07.579 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:06:07.579 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:06:07.837 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:06:07.837 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:06:07.837 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:06:07.837 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:06:07.837 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:06:07.837 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:06:07.837 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:06:07.837 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:06:07.837 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:06:07.837 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:06:07.837 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:06:07.837 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:06:07.837 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:06:07.838 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:06:07.838 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:06:07.838 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:06:07.838 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:06:07.838 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:06:08.096 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:06:08.096 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:06:08.096 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:06:08.096 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:06:08.096 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:06:08.096 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:06:08.096 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:06:08.096 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:06:08.096 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:06:08.096 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:06:08.096 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:06:08.096 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:06:08.096 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:06:08.096 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:06:08.356 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:06:08.356 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:06:08.356 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:06:08.356 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:06:08.356 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:06:08.356 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:06:08.356 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:06:08.356 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:06:08.356 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:06:08.356 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:06:08.356 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:06:08.356 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:06:08.356 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:06:08.356 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:06:08.356 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:06:08.356 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:06:08.356 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:06:08.356 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:06:08.356 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:06:08.356 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:06:08.356 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:06:08.356 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:06:08.356 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:06:08.356 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:06:08.356 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:06:08.356 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:06:08.356 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:06:08.356 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:06:08.615 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:06:08.615 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:06:08.615 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:06:08.615 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:06:08.615 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:06:08.615 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:06:08.615 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:06:08.615 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:06:08.615 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:06:08.615 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:06:08.615 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:06:08.615 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:06:08.615 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:06:08.615 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:06:08.615 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:06:08.615 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:06:08.615 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:06:08.874 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:06:08.874 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:06:08.874 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:06:08.874 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:06:08.874 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:06:08.874 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:06:08.874 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:06:08.874 [140/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:06:08.874 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:06:08.874 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:06:08.874 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:06:08.874 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:06:08.874 [145/203] Linking target lib/libxnvme.so
00:06:08.874 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:06:08.874 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:06:08.874 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:06:09.133 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:06:09.133 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:06:09.133 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:06:09.133 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:06:09.133 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:06:09.133 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:06:09.133 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:06:09.133 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:06:09.133 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:06:09.133 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:06:09.133 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:06:09.133 [160/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:06:09.133 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:06:09.133 [162/203] Compiling C object tools/xdd.p/xdd.c.o
00:06:09.393 [163/203] Compiling C object tools/lblk.p/lblk.c.o
00:06:09.393 [164/203] Compiling C object tools/kvs.p/kvs.c.o
00:06:09.393 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:06:09.393 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:06:09.393 [167/203] Compiling C object tools/zoned.p/zoned.c.o
00:06:09.393 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:06:09.393 [169/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:06:09.393 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:06:09.393 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:06:09.393 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:06:09.652 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:06:09.652 [174/203] Linking static target lib/libxnvme.a
00:06:09.652 [175/203] Linking target tests/xnvme_tests_enum
00:06:09.652 [176/203] Linking target tests/xnvme_tests_buf
00:06:09.652 [177/203] Linking target tests/xnvme_tests_cli
00:06:09.652 [178/203] Linking target tests/xnvme_tests_async_intf
00:06:09.652 [179/203] Linking target tests/xnvme_tests_lblk
00:06:09.652 [180/203] Linking target tests/xnvme_tests_scc
00:06:09.652 [181/203] Linking target tests/xnvme_tests_xnvme_cli
00:06:09.652 [182/203] Linking target tests/xnvme_tests_xnvme_file
00:06:09.652 [183/203] Linking target tests/xnvme_tests_znd_append
00:06:09.652 [184/203] Linking target tests/xnvme_tests_znd_explicit_open
00:06:09.652 [185/203] Linking target tests/xnvme_tests_znd_state
00:06:09.652 [186/203] Linking target tests/xnvme_tests_ioworker
00:06:09.652 [187/203] Linking target tests/xnvme_tests_map
00:06:09.652 [188/203] Linking target tests/xnvme_tests_kvs
00:06:09.652 [189/203] Linking target tools/xdd
00:06:09.652 [190/203] Linking target tools/lblk
00:06:09.652 [191/203] Linking target examples/xnvme_hello
00:06:09.652 [192/203] Linking target examples/xnvme_dev
00:06:09.652 [193/203] Linking target tools/xnvme
00:06:09.652 [194/203] Linking target tests/xnvme_tests_znd_zrwa
00:06:09.911 [195/203] Linking target examples/xnvme_enum
00:06:09.911 [196/203] Linking target tools/xnvme_file
00:06:09.911 [197/203] Linking target tools/zoned
00:06:09.911 [198/203] Linking target examples/zoned_io_async
00:06:09.911 [199/203] Linking target examples/zoned_io_sync
00:06:09.911 [200/203] Linking target examples/xnvme_single_async
00:06:09.911 [201/203] Linking target examples/xnvme_io_async
00:06:09.911 [202/203] Linking target tools/kvs
00:06:09.911 [203/203] Linking target examples/xnvme_single_sync
00:06:09.911 INFO: autodetecting backend as ninja
00:06:09.911 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:09.911 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:06:18.065 The Meson build system
00:06:18.065 Version: 1.5.0
00:06:18.065 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:06:18.065 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:06:18.065 Build type: native build
00:06:18.065 Program cat found: YES (/usr/bin/cat)
00:06:18.065 Project name: DPDK
00:06:18.065 Project version: 24.03.0
00:06:18.065 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:18.065 C linker for the host machine: cc ld.bfd 2.40-14
00:06:18.065 Host machine cpu family: x86_64
00:06:18.065 Host machine cpu: x86_64
00:06:18.065 Message: ## Building in Developer Mode ##
00:06:18.065 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:18.065 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:06:18.065 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:18.065 Program python3 found: YES (/usr/bin/python3)
00:06:18.065 Program cat found: YES (/usr/bin/cat)
00:06:18.065 Compiler for C supports arguments -march=native: YES
00:06:18.065 Checking for size of "void *" : 8
00:06:18.065 Checking for size of "void *" : 8 (cached)
00:06:18.065 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:18.065 Library m found: YES
00:06:18.065 Library numa found: YES
00:06:18.065 Has header "numaif.h" : YES
00:06:18.065 Library fdt found: NO
00:06:18.065 Library execinfo found: NO
00:06:18.065 Has header "execinfo.h" : YES
00:06:18.065 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:18.065 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:18.065 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:18.066 Run-time dependency openssl found: YES 3.1.1
00:06:18.066 Run-time dependency libpcap found: YES 1.10.4
00:06:18.066 Has header "pcap.h" with dependency libpcap: YES
00:06:18.066 Compiler for C supports arguments -Wcast-qual: YES
00:06:18.066 Compiler for C supports arguments -Wdeprecated: YES
00:06:18.066 Compiler for C supports arguments -Wformat: YES
00:06:18.066 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:18.066 Compiler for C supports arguments -Wformat-security: NO
00:06:18.066 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:18.066 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:18.066 Compiler for C supports arguments -Wnested-externs: YES
00:06:18.066 Compiler for C supports arguments -Wold-style-definition: YES
00:06:18.066 Compiler for C supports arguments -Wpointer-arith: YES
00:06:18.066 Compiler for C supports arguments -Wsign-compare: YES
00:06:18.066 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:18.066 Compiler for C supports arguments -Wundef: YES
00:06:18.066 Compiler for C supports arguments -Wwrite-strings: YES
00:06:18.066 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:18.066 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:18.066 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:18.066 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:18.066 Program objdump found: YES (/usr/bin/objdump)
00:06:18.066 Compiler for C supports arguments -mavx512f: YES
00:06:18.066 Checking if "AVX512 checking" compiles: YES
00:06:18.066 Fetching value of define "__SSE4_2__" : 1
00:06:18.066 Fetching value of define "__AES__" : 1
00:06:18.066 Fetching value of define "__AVX__" : 1
00:06:18.066 Fetching value of define "__AVX2__" : 1
00:06:18.066 Fetching value of define "__AVX512BW__" : 1
00:06:18.066 Fetching value of define "__AVX512CD__" : 1
00:06:18.066 Fetching value of define "__AVX512DQ__" : 1
00:06:18.066 Fetching value of define "__AVX512F__" : 1
00:06:18.066 Fetching value of define "__AVX512VL__" : 1
00:06:18.066 Fetching value of define "__PCLMUL__" : 1
00:06:18.066 Fetching value of define "__RDRND__" : 1
00:06:18.066 Fetching value of define "__RDSEED__" : 1
00:06:18.066 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:06:18.066 Fetching value of define "__znver1__" : (undefined)
00:06:18.066 Fetching value of define "__znver2__" : (undefined)
00:06:18.066 Fetching value of define "__znver3__" : (undefined)
00:06:18.066 Fetching value of define "__znver4__" : (undefined)
00:06:18.066 Library asan found: YES
00:06:18.066 Compiler for C supports arguments -Wno-format-truncation: YES
00:06:18.066 Message: lib/log: Defining dependency "log"
00:06:18.066 Message: lib/kvargs: Defining dependency "kvargs"
00:06:18.066 Message: lib/telemetry: Defining dependency "telemetry"
00:06:18.066 Library rt found: YES
00:06:18.066 Checking for function "getentropy" : NO
00:06:18.066 Message: lib/eal: Defining dependency "eal"
00:06:18.066 Message: lib/ring: Defining dependency "ring"
00:06:18.066 Message: lib/rcu: Defining dependency "rcu"
00:06:18.066 Message: lib/mempool: Defining dependency "mempool"
00:06:18.066 Message: lib/mbuf: Defining dependency "mbuf"
00:06:18.066 Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:18.066 Fetching value of define "__AVX512F__" : 1 (cached)
00:06:18.066 Fetching value of define "__AVX512BW__" : 1 (cached)
00:06:18.066 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:06:18.066 Fetching value of define "__AVX512VL__" : 1 (cached)
00:06:18.066 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:06:18.066 Compiler for C supports arguments -mpclmul: YES
00:06:18.066 Compiler for C supports arguments -maes: YES
00:06:18.066 Compiler for C supports arguments -mavx512f: YES (cached)
00:06:18.066 Compiler for C supports arguments -mavx512bw: YES
00:06:18.066 Compiler for C supports arguments -mavx512dq: YES
00:06:18.066 Compiler for C supports arguments -mavx512vl: YES
00:06:18.066 Compiler for C supports arguments -mvpclmulqdq: YES
00:06:18.066 Compiler for C supports arguments -mavx2: YES
00:06:18.066 Compiler for C supports arguments -mavx: YES
00:06:18.066 Message: lib/net: Defining dependency "net"
00:06:18.066 Message: lib/meter: Defining dependency "meter"
00:06:18.066 Message: lib/ethdev: Defining dependency "ethdev"
00:06:18.066 Message: lib/pci: Defining dependency "pci"
00:06:18.066 Message: lib/cmdline: Defining dependency "cmdline"
00:06:18.066 Message: lib/hash: Defining dependency "hash"
00:06:18.066 Message: lib/timer: Defining dependency "timer"
00:06:18.066 Message: lib/compressdev: Defining dependency "compressdev"
00:06:18.066 Message: lib/cryptodev: Defining dependency "cryptodev"
00:06:18.066 Message: lib/dmadev: Defining dependency "dmadev"
00:06:18.066 Compiler for C supports arguments -Wno-cast-qual: YES
00:06:18.066 Message: lib/power: Defining dependency "power"
00:06:18.066 Message: lib/reorder: Defining dependency "reorder"
00:06:18.066 Message: lib/security: Defining dependency "security"
00:06:18.066 Has header "linux/userfaultfd.h" : YES
00:06:18.066 Has header "linux/vduse.h" : YES
00:06:18.066 Message: lib/vhost: Defining dependency "vhost"
00:06:18.066 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:06:18.066 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:06:18.066 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:06:18.066 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:06:18.066 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:06:18.066 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:06:18.066 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:06:18.066 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:06:18.066 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:06:18.066 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:06:18.066 Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:18.066 Configuring doxy-api-html.conf using configuration
00:06:18.066 Configuring doxy-api-man.conf using configuration
00:06:18.066 Program mandb found: YES (/usr/bin/mandb)
00:06:18.066 Program sphinx-build found: NO
00:06:18.066 Configuring rte_build_config.h using configuration
00:06:18.066 Message:
00:06:18.066 =================
00:06:18.066 Applications Enabled
00:06:18.066 =================
00:06:18.066
00:06:18.066 apps:
00:06:18.066
00:06:18.066
00:06:18.066 Message:
00:06:18.066 =================
00:06:18.066 Libraries Enabled
00:06:18.066 =================
00:06:18.066
00:06:18.066 libs:
00:06:18.066 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:06:18.066 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:06:18.066 cryptodev, dmadev, power, reorder, security, vhost,
00:06:18.066
00:06:18.066 Message:
00:06:18.066 ===============
00:06:18.066 Drivers Enabled
00:06:18.066 ===============
00:06:18.066
00:06:18.066 common:
00:06:18.066
00:06:18.066 bus:
00:06:18.066 pci, vdev,
00:06:18.066 mempool:
00:06:18.066 ring,
00:06:18.066 dma:
00:06:18.066
00:06:18.066 net:
00:06:18.066
00:06:18.066 crypto:
00:06:18.066
00:06:18.066 compress:
00:06:18.066
00:06:18.066 vdpa:
00:06:18.066
00:06:18.066
00:06:18.066 Message:
00:06:18.066 =================
00:06:18.066 Content Skipped
00:06:18.066 =================
00:06:18.066
00:06:18.066 apps:
00:06:18.066 dumpcap: explicitly disabled via build config
00:06:18.066 graph: explicitly disabled via build config
00:06:18.066 pdump: explicitly disabled via build config
00:06:18.066 proc-info: explicitly disabled via build config
00:06:18.066 test-acl: explicitly disabled via build config
00:06:18.066 test-bbdev: explicitly disabled via build config
00:06:18.066 test-cmdline: explicitly disabled via build config
00:06:18.066 test-compress-perf: explicitly disabled via build config
00:06:18.066 test-crypto-perf: explicitly disabled via build config
00:06:18.066 test-dma-perf: explicitly disabled via build config
00:06:18.066 test-eventdev: explicitly disabled via build config
00:06:18.066 test-fib: explicitly disabled via build config
00:06:18.066 test-flow-perf: explicitly disabled via build config
00:06:18.066 test-gpudev: explicitly disabled via build config
00:06:18.066 test-mldev: explicitly disabled via build config
00:06:18.066 test-pipeline: explicitly disabled via build config
00:06:18.066 test-pmd: explicitly disabled via build config
00:06:18.066 test-regex: explicitly disabled via build config
00:06:18.066 test-sad: explicitly disabled via build config
00:06:18.066 test-security-perf: explicitly disabled via build config
00:06:18.066
00:06:18.066 libs:
00:06:18.066 argparse: explicitly disabled via build config
00:06:18.066 metrics: explicitly disabled via build config
00:06:18.066 acl: explicitly disabled via build config
00:06:18.066 bbdev: explicitly disabled via build config
00:06:18.066 bitratestats: explicitly disabled via build config
00:06:18.066 bpf: explicitly disabled via build config
00:06:18.066 cfgfile: explicitly disabled via build config
00:06:18.066 distributor: explicitly disabled via build config
00:06:18.066 efd: explicitly disabled via build config
00:06:18.066 eventdev: explicitly disabled via build config
00:06:18.066 dispatcher: explicitly disabled via build config
00:06:18.066 gpudev: explicitly disabled via build config
00:06:18.066 gro: explicitly disabled via build config
00:06:18.066 gso: explicitly disabled via build config
00:06:18.066 ip_frag: explicitly disabled via build config
00:06:18.066 jobstats: explicitly disabled via build config
00:06:18.066 latencystats: explicitly disabled via build config
00:06:18.066 lpm: explicitly disabled via build config
00:06:18.066 member: explicitly disabled via build config
00:06:18.066 pcapng: explicitly disabled via build config
00:06:18.066 rawdev: explicitly disabled via build config
00:06:18.066 regexdev: explicitly disabled via build config
00:06:18.066 mldev: explicitly disabled via build config
00:06:18.066 rib: explicitly disabled via build config
00:06:18.066 sched: explicitly disabled via build config
00:06:18.066 stack: explicitly disabled via build config
00:06:18.066 ipsec: explicitly disabled via build config
00:06:18.066 pdcp: explicitly disabled via build config 00:06:18.066 fib: explicitly disabled via build config 00:06:18.066 port: explicitly disabled via build config 00:06:18.066 pdump: explicitly disabled via build config 00:06:18.066 table: explicitly disabled via build config 00:06:18.066 pipeline: explicitly disabled via build config 00:06:18.066 graph: explicitly disabled via build config 00:06:18.066 node: explicitly disabled via build config 00:06:18.066 00:06:18.066 drivers: 00:06:18.066 common/cpt: not in enabled drivers build config 00:06:18.066 common/dpaax: not in enabled drivers build config 00:06:18.066 common/iavf: not in enabled drivers build config 00:06:18.066 common/idpf: not in enabled drivers build config 00:06:18.066 common/ionic: not in enabled drivers build config 00:06:18.066 common/mvep: not in enabled drivers build config 00:06:18.067 common/octeontx: not in enabled drivers build config 00:06:18.067 bus/auxiliary: not in enabled drivers build config 00:06:18.067 bus/cdx: not in enabled drivers build config 00:06:18.067 bus/dpaa: not in enabled drivers build config 00:06:18.067 bus/fslmc: not in enabled drivers build config 00:06:18.067 bus/ifpga: not in enabled drivers build config 00:06:18.067 bus/platform: not in enabled drivers build config 00:06:18.067 bus/uacce: not in enabled drivers build config 00:06:18.067 bus/vmbus: not in enabled drivers build config 00:06:18.067 common/cnxk: not in enabled drivers build config 00:06:18.067 common/mlx5: not in enabled drivers build config 00:06:18.067 common/nfp: not in enabled drivers build config 00:06:18.067 common/nitrox: not in enabled drivers build config 00:06:18.067 common/qat: not in enabled drivers build config 00:06:18.067 common/sfc_efx: not in enabled drivers build config 00:06:18.067 mempool/bucket: not in enabled drivers build config 00:06:18.067 mempool/cnxk: not in enabled drivers build config 00:06:18.067 mempool/dpaa: not in enabled drivers build config 00:06:18.067 mempool/dpaa2: not in enabled drivers build config 00:06:18.067 mempool/octeontx: not in enabled drivers build config 00:06:18.067 mempool/stack: not in enabled drivers build config 00:06:18.067 dma/cnxk: not in enabled drivers build config 00:06:18.067 dma/dpaa: not in enabled drivers build config 00:06:18.067 dma/dpaa2: not in enabled drivers build config 00:06:18.067 dma/hisilicon: not in enabled drivers build config 00:06:18.067 dma/idxd: not in enabled drivers build config 00:06:18.067 dma/ioat: not in enabled drivers build config 00:06:18.067 dma/skeleton: not in enabled drivers build config 00:06:18.067 net/af_packet: not in enabled drivers build config 00:06:18.067 net/af_xdp: not in enabled drivers build config 00:06:18.067 net/ark: not in enabled drivers build config 00:06:18.067 net/atlantic: not in enabled drivers build config 00:06:18.067 net/avp: not in enabled drivers build config 00:06:18.067 net/axgbe: not in enabled drivers build config 00:06:18.067 net/bnx2x: not in enabled drivers build config 00:06:18.067 net/bnxt: not in enabled drivers build config 00:06:18.067 net/bonding: not in enabled drivers build config 00:06:18.067 net/cnxk: not in enabled drivers build config 00:06:18.067 net/cpfl: not in enabled drivers build config 00:06:18.067 net/cxgbe: not in enabled drivers build config 00:06:18.067 net/dpaa: not in enabled drivers build config 00:06:18.067 net/dpaa2: not in enabled drivers build config 00:06:18.067 net/e1000: not in enabled drivers build config 00:06:18.067 net/ena: not in enabled drivers build 
config 00:06:18.067 net/enetc: not in enabled drivers build config 00:06:18.067 net/enetfec: not in enabled drivers build config 00:06:18.067 net/enic: not in enabled drivers build config 00:06:18.067 net/failsafe: not in enabled drivers build config 00:06:18.067 net/fm10k: not in enabled drivers build config 00:06:18.067 net/gve: not in enabled drivers build config 00:06:18.067 net/hinic: not in enabled drivers build config 00:06:18.067 net/hns3: not in enabled drivers build config 00:06:18.067 net/i40e: not in enabled drivers build config 00:06:18.067 net/iavf: not in enabled drivers build config 00:06:18.067 net/ice: not in enabled drivers build config 00:06:18.067 net/idpf: not in enabled drivers build config 00:06:18.067 net/igc: not in enabled drivers build config 00:06:18.067 net/ionic: not in enabled drivers build config 00:06:18.067 net/ipn3ke: not in enabled drivers build config 00:06:18.067 net/ixgbe: not in enabled drivers build config 00:06:18.067 net/mana: not in enabled drivers build config 00:06:18.067 net/memif: not in enabled drivers build config 00:06:18.067 net/mlx4: not in enabled drivers build config 00:06:18.067 net/mlx5: not in enabled drivers build config 00:06:18.067 net/mvneta: not in enabled drivers build config 00:06:18.067 net/mvpp2: not in enabled drivers build config 00:06:18.067 net/netvsc: not in enabled drivers build config 00:06:18.067 net/nfb: not in enabled drivers build config 00:06:18.067 net/nfp: not in enabled drivers build config 00:06:18.067 net/ngbe: not in enabled drivers build config 00:06:18.067 net/null: not in enabled drivers build config 00:06:18.067 net/octeontx: not in enabled drivers build config 00:06:18.067 net/octeon_ep: not in enabled drivers build config 00:06:18.067 net/pcap: not in enabled drivers build config 00:06:18.067 net/pfe: not in enabled drivers build config 00:06:18.067 net/qede: not in enabled drivers build config 00:06:18.067 net/ring: not in enabled drivers build config 00:06:18.067 net/sfc: not in enabled drivers build config 00:06:18.067 net/softnic: not in enabled drivers build config 00:06:18.067 net/tap: not in enabled drivers build config 00:06:18.067 net/thunderx: not in enabled drivers build config 00:06:18.067 net/txgbe: not in enabled drivers build config 00:06:18.067 net/vdev_netvsc: not in enabled drivers build config 00:06:18.067 net/vhost: not in enabled drivers build config 00:06:18.067 net/virtio: not in enabled drivers build config 00:06:18.067 net/vmxnet3: not in enabled drivers build config 00:06:18.067 raw/*: missing internal dependency, "rawdev" 00:06:18.067 crypto/armv8: not in enabled drivers build config 00:06:18.067 crypto/bcmfs: not in enabled drivers build config 00:06:18.067 crypto/caam_jr: not in enabled drivers build config 00:06:18.067 crypto/ccp: not in enabled drivers build config 00:06:18.067 crypto/cnxk: not in enabled drivers build config 00:06:18.067 crypto/dpaa_sec: not in enabled drivers build config 00:06:18.067 crypto/dpaa2_sec: not in enabled drivers build config 00:06:18.067 crypto/ipsec_mb: not in enabled drivers build config 00:06:18.067 crypto/mlx5: not in enabled drivers build config 00:06:18.067 crypto/mvsam: not in enabled drivers build config 00:06:18.067 crypto/nitrox: not in enabled drivers build config 00:06:18.067 crypto/null: not in enabled drivers build config 00:06:18.067 crypto/octeontx: not in enabled drivers build config 00:06:18.067 crypto/openssl: not in enabled drivers build config 00:06:18.067 crypto/scheduler: not in enabled drivers build config 
00:06:18.067 crypto/uadk: not in enabled drivers build config 00:06:18.067 crypto/virtio: not in enabled drivers build config 00:06:18.067 compress/isal: not in enabled drivers build config 00:06:18.067 compress/mlx5: not in enabled drivers build config 00:06:18.067 compress/nitrox: not in enabled drivers build config 00:06:18.067 compress/octeontx: not in enabled drivers build config 00:06:18.067 compress/zlib: not in enabled drivers build config 00:06:18.067 regex/*: missing internal dependency, "regexdev" 00:06:18.067 ml/*: missing internal dependency, "mldev" 00:06:18.067 vdpa/ifc: not in enabled drivers build config 00:06:18.067 vdpa/mlx5: not in enabled drivers build config 00:06:18.067 vdpa/nfp: not in enabled drivers build config 00:06:18.067 vdpa/sfc: not in enabled drivers build config 00:06:18.067 event/*: missing internal dependency, "eventdev" 00:06:18.067 baseband/*: missing internal dependency, "bbdev" 00:06:18.067 gpu/*: missing internal dependency, "gpudev" 00:06:18.067 00:06:18.067 00:06:18.067 Build targets in project: 85 00:06:18.067 00:06:18.067 DPDK 24.03.0 00:06:18.067 00:06:18.067 User defined options 00:06:18.067 buildtype : debug 00:06:18.067 default_library : shared 00:06:18.067 libdir : lib 00:06:18.067 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:18.067 b_sanitize : address 00:06:18.067 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:18.067 c_link_args : 00:06:18.067 cpu_instruction_set: native 00:06:18.067 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:18.067 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:18.067 enable_docs : false 00:06:18.067 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:18.067 enable_kmods : false 00:06:18.067 max_lcores : 128 00:06:18.067 tests : false 00:06:18.067 00:06:18.067 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:18.325 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:18.326 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:18.326 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:18.583 [3/268] Linking static target lib/librte_kvargs.a 00:06:18.583 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:18.583 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:18.583 [6/268] Linking static target lib/librte_log.a 00:06:18.841 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:19.100 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:19.100 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:19.100 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:19.100 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:19.100 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:19.100 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:19.100 [14/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:19.100 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:19.358 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:19.358 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:19.358 [18/268] Linking static target lib/librte_telemetry.a 00:06:19.617 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:19.617 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:19.617 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:19.617 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:19.617 [23/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:19.876 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:19.876 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:19.876 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:19.876 [27/268] Linking target lib/librte_log.so.24.1 00:06:19.876 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:20.204 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:20.204 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:20.204 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:20.204 [32/268] Linking target lib/librte_kvargs.so.24.1 00:06:20.204 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:20.204 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:20.464 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:20.464 [36/268] Linking target lib/librte_telemetry.so.24.1 00:06:20.464 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:20.464 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:20.464 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:20.464 [40/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:20.464 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:20.464 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:20.723 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:20.723 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:20.723 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:20.723 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:20.723 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:20.982 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:20.982 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:20.982 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:21.241 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:21.241 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:21.241 [53/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:21.241 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:21.499 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:21.499 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:21.499 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:21.499 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:21.499 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:21.758 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:21.758 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:21.758 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:21.758 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:22.016 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:22.016 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:22.016 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:22.016 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:22.016 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:22.275 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:22.275 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:22.275 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:22.534 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:22.534 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:22.534 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:22.534 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:22.534 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:22.534 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:22.534 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:22.534 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:22.792 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:22.792 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:22.792 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:23.053 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:23.053 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:23.053 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:23.053 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:23.053 [87/268] Linking static target lib/librte_ring.a 00:06:23.053 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:23.053 [89/268] Linking static target lib/librte_eal.a 00:06:23.053 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:23.053 [91/268] Linking static target lib/librte_rcu.a 00:06:23.313 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:23.313 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:23.313 [94/268] Linking static target lib/librte_mempool.a 00:06:23.571 
[95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:23.571 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:23.571 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:23.572 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:23.572 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.830 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.830 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:23.830 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:23.830 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:23.830 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:24.089 [105/268] Linking static target lib/librte_mbuf.a 00:06:24.089 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:24.089 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:24.089 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:24.089 [109/268] Linking static target lib/librte_net.a 00:06:24.347 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:24.347 [111/268] Linking static target lib/librte_meter.a 00:06:24.347 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:24.347 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:24.605 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:24.605 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:24.605 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.605 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.863 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.863 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:24.863 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:25.121 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:25.121 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:25.380 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:25.380 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:25.639 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:25.639 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:25.639 [127/268] Linking static target lib/librte_pci.a 00:06:25.639 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:25.639 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:25.897 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:25.897 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:25.897 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:25.897 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:25.897 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:25.897 [135/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:25.897 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:25.897 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:26.157 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:26.157 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:26.157 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:26.157 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.157 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:26.157 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:26.157 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:26.157 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:26.157 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:26.439 [147/268] Linking static target lib/librte_cmdline.a 00:06:26.439 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:26.439 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:26.710 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:26.710 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:26.710 [152/268] Linking static target lib/librte_timer.a 00:06:26.984 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:26.984 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:26.984 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:26.984 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:27.242 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:27.242 [158/268] Linking static target lib/librte_ethdev.a 00:06:27.242 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:27.242 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:27.242 [161/268] Linking static target lib/librte_hash.a 00:06:27.501 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:27.501 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:27.501 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:27.501 [165/268] Linking static target lib/librte_compressdev.a 00:06:27.501 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:27.501 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:27.759 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:27.759 [169/268] Linking static target lib/librte_dmadev.a 00:06:28.018 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:28.019 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:28.019 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:28.019 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:28.277 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.277 
[175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:28.536 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:28.536 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:28.536 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.536 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:28.536 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.795 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.795 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:28.795 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:28.795 [184/268] Linking static target lib/librte_cryptodev.a 00:06:29.052 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:29.052 [186/268] Linking static target lib/librte_power.a 00:06:29.052 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:29.052 [188/268] Linking static target lib/librte_reorder.a 00:06:29.052 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:29.052 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:29.311 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:29.311 [192/268] Linking static target lib/librte_security.a 00:06:29.571 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:29.571 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:29.830 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.397 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:30.397 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:30.397 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.397 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.397 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:30.397 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:30.680 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:30.680 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:30.938 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:30.938 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:30.938 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:30.938 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:30.938 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:31.196 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:31.196 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:31.196 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:31.455 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:31.455 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:31.455 
[214/268] Linking static target drivers/librte_bus_pci.a 00:06:31.455 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:31.455 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:31.455 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:31.455 [218/268] Linking static target drivers/librte_bus_vdev.a 00:06:31.455 [219/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.455 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:31.455 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:31.713 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:31.713 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:31.713 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:31.713 [225/268] Linking static target drivers/librte_mempool_ring.a 00:06:31.971 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.971 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.347 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:36.654 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.654 [230/268] Linking target lib/librte_eal.so.24.1 00:06:36.654 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:36.654 [232/268] Linking target lib/librte_pci.so.24.1 00:06:36.654 [233/268] Linking target lib/librte_meter.so.24.1 00:06:36.654 [234/268] Linking target lib/librte_timer.so.24.1 00:06:36.654 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:36.654 [236/268] Linking target lib/librte_dmadev.so.24.1 00:06:36.654 [237/268] Linking target lib/librte_ring.so.24.1 00:06:36.654 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:36.654 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:36.654 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:36.912 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:36.912 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:36.912 [243/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:36.912 [244/268] Linking target lib/librte_rcu.so.24.1 00:06:36.912 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:36.912 [246/268] Linking target lib/librte_mempool.so.24.1 00:06:36.912 [247/268] Linking static target lib/librte_vhost.a 00:06:36.912 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.912 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:36.912 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:36.912 [251/268] Linking target lib/librte_mbuf.so.24.1 00:06:36.912 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:37.171 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:37.171 [254/268] 
Linking target lib/librte_net.so.24.1 00:06:37.171 [255/268] Linking target lib/librte_compressdev.so.24.1 00:06:37.171 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:06:37.171 [257/268] Linking target lib/librte_reorder.so.24.1 00:06:37.429 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:37.429 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:37.429 [260/268] Linking target lib/librte_hash.so.24.1 00:06:37.429 [261/268] Linking target lib/librte_cmdline.so.24.1 00:06:37.429 [262/268] Linking target lib/librte_security.so.24.1 00:06:37.429 [263/268] Linking target lib/librte_ethdev.so.24.1 00:06:37.688 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:37.688 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:37.688 [266/268] Linking target lib/librte_power.so.24.1 00:06:39.592 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:39.592 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:39.592 INFO: autodetecting backend as ninja 00:06:39.592 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:57.678 CC lib/log/log.o 00:06:57.678 CC lib/log/log_flags.o 00:06:57.678 CC lib/log/log_deprecated.o 00:06:57.678 CC lib/ut/ut.o 00:06:57.678 CC lib/ut_mock/mock.o 00:06:57.678 LIB libspdk_ut.a 00:06:57.678 LIB libspdk_log.a 00:06:57.678 SO libspdk_ut.so.2.0 00:06:57.678 LIB libspdk_ut_mock.a 00:06:57.678 SO libspdk_log.so.7.0 00:06:57.678 SYMLINK libspdk_ut.so 00:06:57.678 SO libspdk_ut_mock.so.6.0 00:06:57.678 SYMLINK libspdk_log.so 00:06:57.678 SYMLINK libspdk_ut_mock.so 00:06:57.678 CC lib/ioat/ioat.o 00:06:57.678 CC lib/dma/dma.o 00:06:57.678 CC lib/util/base64.o 00:06:57.678 CC lib/util/bit_array.o 00:06:57.678 CC lib/util/crc32.o 00:06:57.678 CC lib/util/cpuset.o 00:06:57.678 CC lib/util/crc16.o 00:06:57.678 CC lib/util/crc32c.o 00:06:57.678 CXX lib/trace_parser/trace.o 00:06:57.678 CC lib/vfio_user/host/vfio_user_pci.o 00:06:57.678 CC lib/util/crc32_ieee.o 00:06:57.678 CC lib/util/crc64.o 00:06:57.678 CC lib/util/dif.o 00:06:57.678 CC lib/util/fd.o 00:06:57.678 CC lib/util/fd_group.o 00:06:57.678 LIB libspdk_dma.a 00:06:57.678 CC lib/util/file.o 00:06:57.678 SO libspdk_dma.so.5.0 00:06:57.678 CC lib/util/hexlify.o 00:06:57.678 CC lib/vfio_user/host/vfio_user.o 00:06:57.678 LIB libspdk_ioat.a 00:06:57.678 SYMLINK libspdk_dma.so 00:06:57.678 CC lib/util/iov.o 00:06:57.678 SO libspdk_ioat.so.7.0 00:06:57.678 CC lib/util/math.o 00:06:57.678 CC lib/util/net.o 00:06:57.678 CC lib/util/pipe.o 00:06:57.678 SYMLINK libspdk_ioat.so 00:06:57.678 CC lib/util/strerror_tls.o 00:06:57.678 CC lib/util/string.o 00:06:57.678 CC lib/util/uuid.o 00:06:57.678 CC lib/util/xor.o 00:06:57.678 LIB libspdk_vfio_user.a 00:06:57.678 CC lib/util/zipf.o 00:06:57.678 SO libspdk_vfio_user.so.5.0 00:06:57.678 CC lib/util/md5.o 00:06:57.936 SYMLINK libspdk_vfio_user.so 00:06:58.194 LIB libspdk_util.a 00:06:58.194 SO libspdk_util.so.10.0 00:06:58.451 SYMLINK libspdk_util.so 00:06:58.451 LIB libspdk_trace_parser.a 00:06:58.451 SO libspdk_trace_parser.so.6.0 00:06:58.710 SYMLINK libspdk_trace_parser.so 00:06:58.710 CC lib/env_dpdk/env.o 00:06:58.710 CC lib/env_dpdk/pci.o 00:06:58.710 CC lib/rdma_provider/common.o 00:06:58.710 CC lib/env_dpdk/init.o 00:06:58.710 CC lib/env_dpdk/memory.o 00:06:58.710 CC 
lib/json/json_parse.o 00:06:58.710 CC lib/rdma_utils/rdma_utils.o 00:06:58.710 CC lib/vmd/vmd.o 00:06:58.710 CC lib/conf/conf.o 00:06:58.710 CC lib/idxd/idxd.o 00:06:58.970 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:58.970 LIB libspdk_conf.a 00:06:58.970 SO libspdk_conf.so.6.0 00:06:58.970 CC lib/json/json_util.o 00:06:58.970 LIB libspdk_rdma_utils.a 00:06:58.970 SYMLINK libspdk_conf.so 00:06:58.970 CC lib/idxd/idxd_user.o 00:06:58.970 SO libspdk_rdma_utils.so.1.0 00:06:58.970 SYMLINK libspdk_rdma_utils.so 00:06:58.970 CC lib/idxd/idxd_kernel.o 00:06:58.970 CC lib/env_dpdk/threads.o 00:06:58.970 LIB libspdk_rdma_provider.a 00:06:59.229 CC lib/env_dpdk/pci_ioat.o 00:06:59.229 SO libspdk_rdma_provider.so.6.0 00:06:59.229 SYMLINK libspdk_rdma_provider.so 00:06:59.229 CC lib/env_dpdk/pci_virtio.o 00:06:59.229 CC lib/env_dpdk/pci_vmd.o 00:06:59.229 CC lib/env_dpdk/pci_idxd.o 00:06:59.229 CC lib/vmd/led.o 00:06:59.229 CC lib/json/json_write.o 00:06:59.229 CC lib/env_dpdk/pci_event.o 00:06:59.488 CC lib/env_dpdk/sigbus_handler.o 00:06:59.488 CC lib/env_dpdk/pci_dpdk.o 00:06:59.488 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:59.488 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:59.488 LIB libspdk_idxd.a 00:06:59.488 LIB libspdk_vmd.a 00:06:59.488 SO libspdk_idxd.so.12.1 00:06:59.488 SO libspdk_vmd.so.6.0 00:06:59.488 SYMLINK libspdk_idxd.so 00:06:59.488 SYMLINK libspdk_vmd.so 00:06:59.746 LIB libspdk_json.a 00:06:59.746 SO libspdk_json.so.6.0 00:06:59.746 SYMLINK libspdk_json.so 00:07:00.313 CC lib/jsonrpc/jsonrpc_server.o 00:07:00.313 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:00.313 CC lib/jsonrpc/jsonrpc_client.o 00:07:00.313 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:00.572 LIB libspdk_env_dpdk.a 00:07:00.572 LIB libspdk_jsonrpc.a 00:07:00.572 SO libspdk_jsonrpc.so.6.0 00:07:00.572 SO libspdk_env_dpdk.so.15.0 00:07:00.830 SYMLINK libspdk_jsonrpc.so 00:07:00.830 SYMLINK libspdk_env_dpdk.so 00:07:01.088 CC lib/rpc/rpc.o 00:07:01.346 LIB libspdk_rpc.a 00:07:01.346 SO libspdk_rpc.so.6.0 00:07:01.346 SYMLINK libspdk_rpc.so 00:07:01.913 CC lib/notify/notify.o 00:07:01.913 CC lib/notify/notify_rpc.o 00:07:01.913 CC lib/trace/trace_rpc.o 00:07:01.913 CC lib/trace/trace.o 00:07:01.913 CC lib/trace/trace_flags.o 00:07:01.913 CC lib/keyring/keyring_rpc.o 00:07:01.913 CC lib/keyring/keyring.o 00:07:01.913 LIB libspdk_notify.a 00:07:01.913 SO libspdk_notify.so.6.0 00:07:02.172 LIB libspdk_trace.a 00:07:02.172 SYMLINK libspdk_notify.so 00:07:02.172 LIB libspdk_keyring.a 00:07:02.172 SO libspdk_trace.so.11.0 00:07:02.172 SO libspdk_keyring.so.2.0 00:07:02.172 SYMLINK libspdk_trace.so 00:07:02.172 SYMLINK libspdk_keyring.so 00:07:02.740 CC lib/thread/iobuf.o 00:07:02.740 CC lib/thread/thread.o 00:07:02.740 CC lib/sock/sock.o 00:07:02.740 CC lib/sock/sock_rpc.o 00:07:03.308 LIB libspdk_sock.a 00:07:03.308 SO libspdk_sock.so.10.0 00:07:03.308 SYMLINK libspdk_sock.so 00:07:03.875 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:03.875 CC lib/nvme/nvme_ctrlr.o 00:07:03.875 CC lib/nvme/nvme_fabric.o 00:07:03.875 CC lib/nvme/nvme_ns_cmd.o 00:07:03.875 CC lib/nvme/nvme_ns.o 00:07:03.875 CC lib/nvme/nvme_pcie.o 00:07:03.875 CC lib/nvme/nvme_pcie_common.o 00:07:03.875 CC lib/nvme/nvme_qpair.o 00:07:03.875 CC lib/nvme/nvme.o 00:07:04.445 CC lib/nvme/nvme_quirks.o 00:07:04.445 CC lib/nvme/nvme_transport.o 00:07:04.445 CC lib/nvme/nvme_discovery.o 00:07:04.445 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:04.704 LIB libspdk_thread.a 00:07:04.704 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:04.704 SO libspdk_thread.so.10.2 00:07:04.704 CC lib/nvme/nvme_tcp.o 
00:07:04.704 CC lib/nvme/nvme_opal.o 00:07:04.704 SYMLINK libspdk_thread.so 00:07:04.704 CC lib/nvme/nvme_io_msg.o 00:07:04.963 CC lib/nvme/nvme_poll_group.o 00:07:04.963 CC lib/nvme/nvme_zns.o 00:07:04.963 CC lib/nvme/nvme_stubs.o 00:07:04.963 CC lib/nvme/nvme_auth.o 00:07:05.221 CC lib/nvme/nvme_cuse.o 00:07:05.221 CC lib/nvme/nvme_rdma.o 00:07:05.480 CC lib/accel/accel.o 00:07:05.480 CC lib/accel/accel_rpc.o 00:07:05.480 CC lib/accel/accel_sw.o 00:07:05.480 CC lib/blob/blobstore.o 00:07:05.739 CC lib/blob/request.o 00:07:05.739 CC lib/init/json_config.o 00:07:05.999 CC lib/init/subsystem.o 00:07:05.999 CC lib/blob/zeroes.o 00:07:05.999 CC lib/init/subsystem_rpc.o 00:07:05.999 CC lib/virtio/virtio.o 00:07:06.258 CC lib/fsdev/fsdev.o 00:07:06.258 CC lib/fsdev/fsdev_io.o 00:07:06.258 CC lib/fsdev/fsdev_rpc.o 00:07:06.258 CC lib/blob/blob_bs_dev.o 00:07:06.258 CC lib/init/rpc.o 00:07:06.258 CC lib/virtio/virtio_vhost_user.o 00:07:06.518 CC lib/virtio/virtio_vfio_user.o 00:07:06.518 LIB libspdk_init.a 00:07:06.518 CC lib/virtio/virtio_pci.o 00:07:06.518 SO libspdk_init.so.6.0 00:07:06.518 SYMLINK libspdk_init.so 00:07:06.785 LIB libspdk_accel.a 00:07:06.785 CC lib/event/app.o 00:07:06.785 CC lib/event/log_rpc.o 00:07:06.785 CC lib/event/scheduler_static.o 00:07:06.785 CC lib/event/app_rpc.o 00:07:06.785 CC lib/event/reactor.o 00:07:06.785 LIB libspdk_virtio.a 00:07:06.785 SO libspdk_accel.so.16.0 00:07:06.785 LIB libspdk_nvme.a 00:07:06.785 SO libspdk_virtio.so.7.0 00:07:06.785 LIB libspdk_fsdev.a 00:07:07.045 SO libspdk_fsdev.so.1.0 00:07:07.045 SYMLINK libspdk_virtio.so 00:07:07.045 SYMLINK libspdk_accel.so 00:07:07.045 SYMLINK libspdk_fsdev.so 00:07:07.045 SO libspdk_nvme.so.14.0 00:07:07.304 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:07.304 CC lib/bdev/bdev_rpc.o 00:07:07.304 CC lib/bdev/scsi_nvme.o 00:07:07.304 CC lib/bdev/part.o 00:07:07.304 CC lib/bdev/bdev.o 00:07:07.304 CC lib/bdev/bdev_zone.o 00:07:07.304 LIB libspdk_event.a 00:07:07.562 SYMLINK libspdk_nvme.so 00:07:07.562 SO libspdk_event.so.15.0 00:07:07.562 SYMLINK libspdk_event.so 00:07:08.128 LIB libspdk_fuse_dispatcher.a 00:07:08.128 SO libspdk_fuse_dispatcher.so.1.0 00:07:08.128 SYMLINK libspdk_fuse_dispatcher.so 00:07:09.519 LIB libspdk_blob.a 00:07:09.519 SO libspdk_blob.so.11.0 00:07:09.519 SYMLINK libspdk_blob.so 00:07:10.091 CC lib/lvol/lvol.o 00:07:10.091 CC lib/blobfs/blobfs.o 00:07:10.091 CC lib/blobfs/tree.o 00:07:10.659 LIB libspdk_bdev.a 00:07:10.659 SO libspdk_bdev.so.17.0 00:07:10.943 SYMLINK libspdk_bdev.so 00:07:11.209 LIB libspdk_blobfs.a 00:07:11.209 SO libspdk_blobfs.so.10.0 00:07:11.209 CC lib/nvmf/ctrlr.o 00:07:11.209 CC lib/nvmf/ctrlr_discovery.o 00:07:11.209 CC lib/nbd/nbd.o 00:07:11.209 CC lib/ublk/ublk.o 00:07:11.210 CC lib/nvmf/subsystem.o 00:07:11.210 CC lib/nvmf/ctrlr_bdev.o 00:07:11.210 CC lib/ftl/ftl_core.o 00:07:11.210 CC lib/scsi/dev.o 00:07:11.210 LIB libspdk_lvol.a 00:07:11.210 SYMLINK libspdk_blobfs.so 00:07:11.210 CC lib/scsi/lun.o 00:07:11.210 SO libspdk_lvol.so.10.0 00:07:11.210 SYMLINK libspdk_lvol.so 00:07:11.210 CC lib/ublk/ublk_rpc.o 00:07:11.470 CC lib/scsi/port.o 00:07:11.470 CC lib/nbd/nbd_rpc.o 00:07:11.470 CC lib/scsi/scsi.o 00:07:11.470 CC lib/scsi/scsi_bdev.o 00:07:11.728 CC lib/ftl/ftl_init.o 00:07:11.728 CC lib/ftl/ftl_layout.o 00:07:11.728 CC lib/scsi/scsi_pr.o 00:07:11.728 LIB libspdk_nbd.a 00:07:11.728 CC lib/scsi/scsi_rpc.o 00:07:11.728 SO libspdk_nbd.so.7.0 00:07:11.728 SYMLINK libspdk_nbd.so 00:07:11.728 CC lib/scsi/task.o 00:07:11.728 CC lib/nvmf/nvmf.o 
00:07:11.986 CC lib/nvmf/nvmf_rpc.o 00:07:11.986 CC lib/nvmf/transport.o 00:07:11.986 CC lib/ftl/ftl_debug.o 00:07:11.986 LIB libspdk_ublk.a 00:07:11.986 SO libspdk_ublk.so.3.0 00:07:11.986 CC lib/ftl/ftl_io.o 00:07:11.986 CC lib/nvmf/tcp.o 00:07:11.986 SYMLINK libspdk_ublk.so 00:07:11.986 CC lib/ftl/ftl_sb.o 00:07:12.246 LIB libspdk_scsi.a 00:07:12.246 CC lib/ftl/ftl_l2p.o 00:07:12.246 SO libspdk_scsi.so.9.0 00:07:12.246 CC lib/nvmf/stubs.o 00:07:12.246 CC lib/nvmf/mdns_server.o 00:07:12.246 SYMLINK libspdk_scsi.so 00:07:12.246 CC lib/nvmf/rdma.o 00:07:12.504 CC lib/ftl/ftl_l2p_flat.o 00:07:12.504 CC lib/nvmf/auth.o 00:07:12.763 CC lib/ftl/ftl_nv_cache.o 00:07:12.763 CC lib/ftl/ftl_band.o 00:07:13.035 CC lib/ftl/ftl_band_ops.o 00:07:13.035 CC lib/iscsi/conn.o 00:07:13.035 CC lib/ftl/ftl_writer.o 00:07:13.035 CC lib/vhost/vhost.o 00:07:13.311 CC lib/vhost/vhost_rpc.o 00:07:13.311 CC lib/ftl/ftl_rq.o 00:07:13.311 CC lib/ftl/ftl_reloc.o 00:07:13.311 CC lib/vhost/vhost_scsi.o 00:07:13.568 CC lib/ftl/ftl_l2p_cache.o 00:07:13.569 CC lib/ftl/ftl_p2l.o 00:07:13.569 CC lib/iscsi/init_grp.o 00:07:13.827 CC lib/iscsi/iscsi.o 00:07:13.827 CC lib/iscsi/param.o 00:07:13.827 CC lib/vhost/vhost_blk.o 00:07:13.827 CC lib/vhost/rte_vhost_user.o 00:07:13.827 CC lib/iscsi/portal_grp.o 00:07:14.086 CC lib/iscsi/tgt_node.o 00:07:14.086 CC lib/iscsi/iscsi_subsystem.o 00:07:14.086 CC lib/ftl/ftl_p2l_log.o 00:07:14.344 CC lib/iscsi/iscsi_rpc.o 00:07:14.344 CC lib/ftl/mngt/ftl_mngt.o 00:07:14.344 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:14.602 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:14.602 CC lib/iscsi/task.o 00:07:14.602 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:14.602 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:14.602 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:14.861 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:14.861 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:14.861 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:14.861 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:14.861 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:15.120 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:15.120 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:15.120 LIB libspdk_nvmf.a 00:07:15.120 CC lib/ftl/utils/ftl_conf.o 00:07:15.120 CC lib/ftl/utils/ftl_md.o 00:07:15.120 LIB libspdk_vhost.a 00:07:15.120 CC lib/ftl/utils/ftl_mempool.o 00:07:15.120 SO libspdk_nvmf.so.19.0 00:07:15.120 CC lib/ftl/utils/ftl_bitmap.o 00:07:15.120 SO libspdk_vhost.so.8.0 00:07:15.120 CC lib/ftl/utils/ftl_property.o 00:07:15.378 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:15.378 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:15.378 SYMLINK libspdk_vhost.so 00:07:15.378 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:15.378 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:15.378 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:15.378 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:15.378 SYMLINK libspdk_nvmf.so 00:07:15.378 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:15.637 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:15.637 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:15.637 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:15.637 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:15.637 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:15.637 LIB libspdk_iscsi.a 00:07:15.637 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:15.637 CC lib/ftl/base/ftl_base_dev.o 00:07:15.637 SO libspdk_iscsi.so.8.0 00:07:15.637 CC lib/ftl/base/ftl_base_bdev.o 00:07:15.637 CC lib/ftl/ftl_trace.o 00:07:15.897 SYMLINK libspdk_iscsi.so 00:07:15.897 LIB libspdk_ftl.a 00:07:16.157 SO libspdk_ftl.so.9.0 00:07:16.726 SYMLINK libspdk_ftl.so 00:07:16.984 CC module/env_dpdk/env_dpdk_rpc.o 00:07:17.242 CC 
module/accel/ioat/accel_ioat.o 00:07:17.242 CC module/blob/bdev/blob_bdev.o 00:07:17.242 CC module/accel/error/accel_error.o 00:07:17.242 CC module/accel/dsa/accel_dsa.o 00:07:17.243 CC module/accel/iaa/accel_iaa.o 00:07:17.243 CC module/fsdev/aio/fsdev_aio.o 00:07:17.243 CC module/sock/posix/posix.o 00:07:17.243 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:17.243 CC module/keyring/file/keyring.o 00:07:17.243 LIB libspdk_env_dpdk_rpc.a 00:07:17.243 SO libspdk_env_dpdk_rpc.so.6.0 00:07:17.243 SYMLINK libspdk_env_dpdk_rpc.so 00:07:17.243 CC module/accel/error/accel_error_rpc.o 00:07:17.243 CC module/keyring/file/keyring_rpc.o 00:07:17.243 CC module/accel/ioat/accel_ioat_rpc.o 00:07:17.243 CC module/accel/dsa/accel_dsa_rpc.o 00:07:17.243 LIB libspdk_scheduler_dynamic.a 00:07:17.502 CC module/accel/iaa/accel_iaa_rpc.o 00:07:17.502 SO libspdk_scheduler_dynamic.so.4.0 00:07:17.502 LIB libspdk_blob_bdev.a 00:07:17.502 LIB libspdk_accel_error.a 00:07:17.502 SYMLINK libspdk_scheduler_dynamic.so 00:07:17.502 LIB libspdk_keyring_file.a 00:07:17.502 SO libspdk_blob_bdev.so.11.0 00:07:17.502 LIB libspdk_accel_dsa.a 00:07:17.502 LIB libspdk_accel_ioat.a 00:07:17.502 SO libspdk_accel_error.so.2.0 00:07:17.502 SO libspdk_keyring_file.so.2.0 00:07:17.502 SO libspdk_accel_dsa.so.5.0 00:07:17.502 SO libspdk_accel_ioat.so.6.0 00:07:17.502 SYMLINK libspdk_blob_bdev.so 00:07:17.502 LIB libspdk_accel_iaa.a 00:07:17.502 SYMLINK libspdk_accel_error.so 00:07:17.502 SYMLINK libspdk_keyring_file.so 00:07:17.502 SO libspdk_accel_iaa.so.3.0 00:07:17.502 SYMLINK libspdk_accel_dsa.so 00:07:17.502 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:17.502 SYMLINK libspdk_accel_ioat.so 00:07:17.761 CC module/fsdev/aio/linux_aio_mgr.o 00:07:17.761 CC module/keyring/linux/keyring.o 00:07:17.761 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:17.761 SYMLINK libspdk_accel_iaa.so 00:07:17.761 CC module/keyring/linux/keyring_rpc.o 00:07:17.761 CC module/scheduler/gscheduler/gscheduler.o 00:07:17.761 LIB libspdk_scheduler_dpdk_governor.a 00:07:17.761 CC module/bdev/delay/vbdev_delay.o 00:07:17.761 CC module/bdev/error/vbdev_error.o 00:07:17.761 CC module/blobfs/bdev/blobfs_bdev.o 00:07:17.761 LIB libspdk_fsdev_aio.a 00:07:17.761 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:18.020 LIB libspdk_keyring_linux.a 00:07:18.020 SO libspdk_fsdev_aio.so.1.0 00:07:18.020 SO libspdk_keyring_linux.so.1.0 00:07:18.020 LIB libspdk_scheduler_gscheduler.a 00:07:18.020 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:18.020 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:18.020 LIB libspdk_sock_posix.a 00:07:18.020 SO libspdk_scheduler_gscheduler.so.4.0 00:07:18.020 CC module/bdev/gpt/gpt.o 00:07:18.020 SYMLINK libspdk_keyring_linux.so 00:07:18.020 CC module/bdev/error/vbdev_error_rpc.o 00:07:18.020 SYMLINK libspdk_fsdev_aio.so 00:07:18.020 CC module/bdev/gpt/vbdev_gpt.o 00:07:18.020 SO libspdk_sock_posix.so.6.0 00:07:18.020 CC module/bdev/lvol/vbdev_lvol.o 00:07:18.020 SYMLINK libspdk_scheduler_gscheduler.so 00:07:18.020 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:18.020 SYMLINK libspdk_sock_posix.so 00:07:18.280 LIB libspdk_bdev_error.a 00:07:18.280 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:18.280 SO libspdk_bdev_error.so.6.0 00:07:18.280 CC module/bdev/malloc/bdev_malloc.o 00:07:18.280 LIB libspdk_bdev_delay.a 00:07:18.280 LIB libspdk_blobfs_bdev.a 00:07:18.280 CC module/bdev/null/bdev_null.o 00:07:18.280 SO libspdk_blobfs_bdev.so.6.0 00:07:18.280 LIB libspdk_bdev_gpt.a 00:07:18.280 SYMLINK libspdk_bdev_error.so 00:07:18.280 SO 
libspdk_bdev_delay.so.6.0 00:07:18.280 CC module/bdev/null/bdev_null_rpc.o 00:07:18.280 CC module/bdev/nvme/bdev_nvme.o 00:07:18.280 SO libspdk_bdev_gpt.so.6.0 00:07:18.280 CC module/bdev/passthru/vbdev_passthru.o 00:07:18.280 SYMLINK libspdk_blobfs_bdev.so 00:07:18.280 SYMLINK libspdk_bdev_delay.so 00:07:18.280 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:18.280 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:18.540 SYMLINK libspdk_bdev_gpt.so 00:07:18.540 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:18.540 CC module/bdev/nvme/nvme_rpc.o 00:07:18.540 LIB libspdk_bdev_lvol.a 00:07:18.540 LIB libspdk_bdev_null.a 00:07:18.540 SO libspdk_bdev_lvol.so.6.0 00:07:18.540 SO libspdk_bdev_null.so.6.0 00:07:18.799 LIB libspdk_bdev_malloc.a 00:07:18.799 LIB libspdk_bdev_passthru.a 00:07:18.799 SYMLINK libspdk_bdev_null.so 00:07:18.800 SYMLINK libspdk_bdev_lvol.so 00:07:18.800 SO libspdk_bdev_malloc.so.6.0 00:07:18.800 CC module/bdev/raid/bdev_raid.o 00:07:18.800 SO libspdk_bdev_passthru.so.6.0 00:07:18.800 CC module/bdev/nvme/bdev_mdns_client.o 00:07:18.800 CC module/bdev/split/vbdev_split.o 00:07:18.800 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:18.800 SYMLINK libspdk_bdev_malloc.so 00:07:18.800 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:18.800 SYMLINK libspdk_bdev_passthru.so 00:07:18.800 CC module/bdev/split/vbdev_split_rpc.o 00:07:18.800 CC module/bdev/xnvme/bdev_xnvme.o 00:07:18.800 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:07:18.800 CC module/bdev/aio/bdev_aio.o 00:07:19.058 CC module/bdev/raid/bdev_raid_rpc.o 00:07:19.058 CC module/bdev/raid/bdev_raid_sb.o 00:07:19.058 LIB libspdk_bdev_split.a 00:07:19.058 SO libspdk_bdev_split.so.6.0 00:07:19.058 CC module/bdev/raid/raid0.o 00:07:19.058 CC module/bdev/raid/raid1.o 00:07:19.058 SYMLINK libspdk_bdev_split.so 00:07:19.058 LIB libspdk_bdev_zone_block.a 00:07:19.058 CC module/bdev/raid/concat.o 00:07:19.058 LIB libspdk_bdev_xnvme.a 00:07:19.318 SO libspdk_bdev_zone_block.so.6.0 00:07:19.318 SO libspdk_bdev_xnvme.so.3.0 00:07:19.318 SYMLINK libspdk_bdev_zone_block.so 00:07:19.318 SYMLINK libspdk_bdev_xnvme.so 00:07:19.318 CC module/bdev/nvme/vbdev_opal.o 00:07:19.318 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:19.318 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:19.318 CC module/bdev/aio/bdev_aio_rpc.o 00:07:19.577 LIB libspdk_bdev_aio.a 00:07:19.577 CC module/bdev/ftl/bdev_ftl.o 00:07:19.577 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:19.577 SO libspdk_bdev_aio.so.6.0 00:07:19.577 CC module/bdev/iscsi/bdev_iscsi.o 00:07:19.577 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:19.577 SYMLINK libspdk_bdev_aio.so 00:07:19.577 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:19.577 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:19.577 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:19.835 LIB libspdk_bdev_ftl.a 00:07:19.835 LIB libspdk_bdev_raid.a 00:07:19.835 SO libspdk_bdev_ftl.so.6.0 00:07:20.093 SO libspdk_bdev_raid.so.6.0 00:07:20.093 SYMLINK libspdk_bdev_ftl.so 00:07:20.093 LIB libspdk_bdev_iscsi.a 00:07:20.093 SO libspdk_bdev_iscsi.so.6.0 00:07:20.093 SYMLINK libspdk_bdev_raid.so 00:07:20.093 SYMLINK libspdk_bdev_iscsi.so 00:07:20.093 LIB libspdk_bdev_virtio.a 00:07:20.352 SO libspdk_bdev_virtio.so.6.0 00:07:20.352 SYMLINK libspdk_bdev_virtio.so 00:07:21.290 LIB libspdk_bdev_nvme.a 00:07:21.290 SO libspdk_bdev_nvme.so.7.0 00:07:21.290 SYMLINK libspdk_bdev_nvme.so 00:07:22.226 CC module/event/subsystems/scheduler/scheduler.o 00:07:22.226 CC module/event/subsystems/iobuf/iobuf.o 00:07:22.226 CC module/event/subsystems/iobuf/iobuf_rpc.o 
00:07:22.226 CC module/event/subsystems/vmd/vmd.o 00:07:22.226 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:22.226 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:22.226 CC module/event/subsystems/fsdev/fsdev.o 00:07:22.226 CC module/event/subsystems/sock/sock.o 00:07:22.226 CC module/event/subsystems/keyring/keyring.o 00:07:22.226 LIB libspdk_event_vmd.a 00:07:22.226 LIB libspdk_event_sock.a 00:07:22.226 LIB libspdk_event_fsdev.a 00:07:22.226 LIB libspdk_event_keyring.a 00:07:22.226 LIB libspdk_event_vhost_blk.a 00:07:22.226 LIB libspdk_event_iobuf.a 00:07:22.226 LIB libspdk_event_scheduler.a 00:07:22.226 SO libspdk_event_sock.so.5.0 00:07:22.226 SO libspdk_event_fsdev.so.1.0 00:07:22.226 SO libspdk_event_keyring.so.1.0 00:07:22.226 SO libspdk_event_vhost_blk.so.3.0 00:07:22.226 SO libspdk_event_vmd.so.6.0 00:07:22.226 SO libspdk_event_iobuf.so.3.0 00:07:22.226 SO libspdk_event_scheduler.so.4.0 00:07:22.226 SYMLINK libspdk_event_sock.so 00:07:22.226 SYMLINK libspdk_event_fsdev.so 00:07:22.226 SYMLINK libspdk_event_keyring.so 00:07:22.226 SYMLINK libspdk_event_vhost_blk.so 00:07:22.226 SYMLINK libspdk_event_iobuf.so 00:07:22.226 SYMLINK libspdk_event_scheduler.so 00:07:22.226 SYMLINK libspdk_event_vmd.so 00:07:22.793 CC module/event/subsystems/accel/accel.o 00:07:22.793 LIB libspdk_event_accel.a 00:07:22.793 SO libspdk_event_accel.so.6.0 00:07:23.052 SYMLINK libspdk_event_accel.so 00:07:23.311 CC module/event/subsystems/bdev/bdev.o 00:07:23.579 LIB libspdk_event_bdev.a 00:07:23.579 SO libspdk_event_bdev.so.6.0 00:07:23.579 SYMLINK libspdk_event_bdev.so 00:07:24.148 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:24.148 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:24.148 CC module/event/subsystems/scsi/scsi.o 00:07:24.148 CC module/event/subsystems/nbd/nbd.o 00:07:24.148 CC module/event/subsystems/ublk/ublk.o 00:07:24.148 LIB libspdk_event_scsi.a 00:07:24.148 LIB libspdk_event_ublk.a 00:07:24.148 SO libspdk_event_scsi.so.6.0 00:07:24.148 SO libspdk_event_ublk.so.3.0 00:07:24.148 LIB libspdk_event_nbd.a 00:07:24.406 LIB libspdk_event_nvmf.a 00:07:24.406 SYMLINK libspdk_event_scsi.so 00:07:24.406 SO libspdk_event_nbd.so.6.0 00:07:24.406 SYMLINK libspdk_event_ublk.so 00:07:24.406 SO libspdk_event_nvmf.so.6.0 00:07:24.406 SYMLINK libspdk_event_nbd.so 00:07:24.406 SYMLINK libspdk_event_nvmf.so 00:07:24.666 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:24.666 CC module/event/subsystems/iscsi/iscsi.o 00:07:24.933 LIB libspdk_event_vhost_scsi.a 00:07:24.933 LIB libspdk_event_iscsi.a 00:07:24.933 SO libspdk_event_vhost_scsi.so.3.0 00:07:24.933 SO libspdk_event_iscsi.so.6.0 00:07:24.933 SYMLINK libspdk_event_iscsi.so 00:07:24.933 SYMLINK libspdk_event_vhost_scsi.so 00:07:25.192 SO libspdk.so.6.0 00:07:25.192 SYMLINK libspdk.so 00:07:25.451 CC app/trace_record/trace_record.o 00:07:25.451 CXX app/trace/trace.o 00:07:25.451 TEST_HEADER include/spdk/accel.h 00:07:25.451 TEST_HEADER include/spdk/accel_module.h 00:07:25.451 TEST_HEADER include/spdk/assert.h 00:07:25.451 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:25.451 TEST_HEADER include/spdk/barrier.h 00:07:25.451 TEST_HEADER include/spdk/base64.h 00:07:25.451 TEST_HEADER include/spdk/bdev.h 00:07:25.451 TEST_HEADER include/spdk/bdev_module.h 00:07:25.451 TEST_HEADER include/spdk/bdev_zone.h 00:07:25.451 TEST_HEADER include/spdk/bit_array.h 00:07:25.451 TEST_HEADER include/spdk/bit_pool.h 00:07:25.451 TEST_HEADER include/spdk/blob_bdev.h 00:07:25.451 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:25.451 CC 
app/nvmf_tgt/nvmf_main.o 00:07:25.451 TEST_HEADER include/spdk/blobfs.h 00:07:25.451 TEST_HEADER include/spdk/blob.h 00:07:25.451 TEST_HEADER include/spdk/conf.h 00:07:25.451 TEST_HEADER include/spdk/config.h 00:07:25.451 TEST_HEADER include/spdk/cpuset.h 00:07:25.451 TEST_HEADER include/spdk/crc16.h 00:07:25.451 TEST_HEADER include/spdk/crc32.h 00:07:25.451 TEST_HEADER include/spdk/crc64.h 00:07:25.451 TEST_HEADER include/spdk/dif.h 00:07:25.451 TEST_HEADER include/spdk/dma.h 00:07:25.451 TEST_HEADER include/spdk/endian.h 00:07:25.451 TEST_HEADER include/spdk/env_dpdk.h 00:07:25.451 TEST_HEADER include/spdk/env.h 00:07:25.451 TEST_HEADER include/spdk/event.h 00:07:25.451 TEST_HEADER include/spdk/fd_group.h 00:07:25.451 TEST_HEADER include/spdk/fd.h 00:07:25.451 TEST_HEADER include/spdk/file.h 00:07:25.451 CC test/thread/poller_perf/poller_perf.o 00:07:25.451 TEST_HEADER include/spdk/fsdev.h 00:07:25.451 TEST_HEADER include/spdk/fsdev_module.h 00:07:25.451 CC examples/ioat/perf/perf.o 00:07:25.451 TEST_HEADER include/spdk/ftl.h 00:07:25.451 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:25.711 TEST_HEADER include/spdk/gpt_spec.h 00:07:25.711 TEST_HEADER include/spdk/hexlify.h 00:07:25.711 TEST_HEADER include/spdk/histogram_data.h 00:07:25.711 TEST_HEADER include/spdk/idxd.h 00:07:25.711 CC examples/util/zipf/zipf.o 00:07:25.711 TEST_HEADER include/spdk/idxd_spec.h 00:07:25.711 TEST_HEADER include/spdk/init.h 00:07:25.711 TEST_HEADER include/spdk/ioat.h 00:07:25.711 TEST_HEADER include/spdk/ioat_spec.h 00:07:25.711 TEST_HEADER include/spdk/iscsi_spec.h 00:07:25.711 TEST_HEADER include/spdk/json.h 00:07:25.711 TEST_HEADER include/spdk/jsonrpc.h 00:07:25.711 TEST_HEADER include/spdk/keyring.h 00:07:25.711 CC test/app/bdev_svc/bdev_svc.o 00:07:25.711 TEST_HEADER include/spdk/keyring_module.h 00:07:25.711 TEST_HEADER include/spdk/likely.h 00:07:25.711 TEST_HEADER include/spdk/log.h 00:07:25.711 TEST_HEADER include/spdk/lvol.h 00:07:25.711 TEST_HEADER include/spdk/md5.h 00:07:25.711 CC test/dma/test_dma/test_dma.o 00:07:25.711 TEST_HEADER include/spdk/memory.h 00:07:25.711 TEST_HEADER include/spdk/mmio.h 00:07:25.711 TEST_HEADER include/spdk/nbd.h 00:07:25.711 TEST_HEADER include/spdk/net.h 00:07:25.711 TEST_HEADER include/spdk/notify.h 00:07:25.711 TEST_HEADER include/spdk/nvme.h 00:07:25.711 TEST_HEADER include/spdk/nvme_intel.h 00:07:25.711 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:25.711 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:25.711 TEST_HEADER include/spdk/nvme_spec.h 00:07:25.711 TEST_HEADER include/spdk/nvme_zns.h 00:07:25.711 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:25.711 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:25.711 TEST_HEADER include/spdk/nvmf.h 00:07:25.711 TEST_HEADER include/spdk/nvmf_spec.h 00:07:25.711 TEST_HEADER include/spdk/nvmf_transport.h 00:07:25.711 TEST_HEADER include/spdk/opal.h 00:07:25.711 TEST_HEADER include/spdk/opal_spec.h 00:07:25.711 TEST_HEADER include/spdk/pci_ids.h 00:07:25.711 TEST_HEADER include/spdk/pipe.h 00:07:25.711 TEST_HEADER include/spdk/queue.h 00:07:25.711 TEST_HEADER include/spdk/reduce.h 00:07:25.711 TEST_HEADER include/spdk/rpc.h 00:07:25.711 TEST_HEADER include/spdk/scheduler.h 00:07:25.711 TEST_HEADER include/spdk/scsi.h 00:07:25.711 TEST_HEADER include/spdk/scsi_spec.h 00:07:25.711 TEST_HEADER include/spdk/sock.h 00:07:25.711 TEST_HEADER include/spdk/stdinc.h 00:07:25.711 TEST_HEADER include/spdk/string.h 00:07:25.711 TEST_HEADER include/spdk/thread.h 00:07:25.711 TEST_HEADER include/spdk/trace.h 00:07:25.711 TEST_HEADER 
include/spdk/trace_parser.h 00:07:25.711 TEST_HEADER include/spdk/tree.h 00:07:25.711 TEST_HEADER include/spdk/ublk.h 00:07:25.711 TEST_HEADER include/spdk/util.h 00:07:25.711 TEST_HEADER include/spdk/uuid.h 00:07:25.711 TEST_HEADER include/spdk/version.h 00:07:25.711 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:25.711 LINK interrupt_tgt 00:07:25.711 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:25.711 TEST_HEADER include/spdk/vhost.h 00:07:25.711 TEST_HEADER include/spdk/vmd.h 00:07:25.711 LINK nvmf_tgt 00:07:25.711 TEST_HEADER include/spdk/xor.h 00:07:25.711 TEST_HEADER include/spdk/zipf.h 00:07:25.711 CXX test/cpp_headers/accel.o 00:07:25.711 LINK poller_perf 00:07:25.711 LINK spdk_trace_record 00:07:25.711 LINK zipf 00:07:25.711 LINK bdev_svc 00:07:25.711 LINK ioat_perf 00:07:25.970 CXX test/cpp_headers/accel_module.o 00:07:25.970 LINK spdk_trace 00:07:25.970 CC app/iscsi_tgt/iscsi_tgt.o 00:07:25.970 CC test/app/histogram_perf/histogram_perf.o 00:07:25.970 CC test/app/jsoncat/jsoncat.o 00:07:25.970 CXX test/cpp_headers/assert.o 00:07:25.970 CC app/spdk_tgt/spdk_tgt.o 00:07:26.229 CC examples/ioat/verify/verify.o 00:07:26.229 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:26.229 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:26.229 CC test/app/stub/stub.o 00:07:26.229 LINK test_dma 00:07:26.229 LINK histogram_perf 00:07:26.229 LINK jsoncat 00:07:26.229 LINK iscsi_tgt 00:07:26.229 CXX test/cpp_headers/barrier.o 00:07:26.229 LINK spdk_tgt 00:07:26.229 LINK verify 00:07:26.488 CXX test/cpp_headers/base64.o 00:07:26.488 LINK stub 00:07:26.488 CXX test/cpp_headers/bdev.o 00:07:26.488 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:26.488 CC app/spdk_lspci/spdk_lspci.o 00:07:26.488 CXX test/cpp_headers/bdev_module.o 00:07:26.488 CC app/spdk_nvme_perf/perf.o 00:07:26.748 LINK nvme_fuzz 00:07:26.748 CC app/spdk_nvme_identify/identify.o 00:07:26.748 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:26.748 CC app/spdk_nvme_discover/discovery_aer.o 00:07:26.748 LINK spdk_lspci 00:07:26.748 CC examples/thread/thread/thread_ex.o 00:07:26.748 CC examples/sock/hello_world/hello_sock.o 00:07:26.748 CXX test/cpp_headers/bdev_zone.o 00:07:27.007 LINK spdk_nvme_discover 00:07:27.007 CC app/spdk_top/spdk_top.o 00:07:27.008 CXX test/cpp_headers/bit_array.o 00:07:27.008 LINK hello_sock 00:07:27.008 LINK thread 00:07:27.008 CC app/vhost/vhost.o 00:07:27.266 LINK vhost_fuzz 00:07:27.266 CXX test/cpp_headers/bit_pool.o 00:07:27.266 CC app/spdk_dd/spdk_dd.o 00:07:27.266 CXX test/cpp_headers/blob_bdev.o 00:07:27.266 LINK vhost 00:07:27.266 CXX test/cpp_headers/blobfs_bdev.o 00:07:27.525 CXX test/cpp_headers/blobfs.o 00:07:27.525 CC examples/vmd/lsvmd/lsvmd.o 00:07:27.526 CC app/fio/nvme/fio_plugin.o 00:07:27.526 LINK spdk_nvme_perf 00:07:27.526 LINK spdk_dd 00:07:27.526 CXX test/cpp_headers/blob.o 00:07:27.526 LINK lsvmd 00:07:27.785 LINK spdk_nvme_identify 00:07:27.785 CC examples/idxd/perf/perf.o 00:07:27.785 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:27.785 CXX test/cpp_headers/conf.o 00:07:27.785 CC app/fio/bdev/fio_plugin.o 00:07:27.785 CC examples/vmd/led/led.o 00:07:28.045 LINK spdk_top 00:07:28.045 CXX test/cpp_headers/config.o 00:07:28.045 CXX test/cpp_headers/cpuset.o 00:07:28.045 CC examples/accel/perf/accel_perf.o 00:07:28.045 LINK hello_fsdev 00:07:28.045 LINK led 00:07:28.045 LINK idxd_perf 00:07:28.045 CC examples/blob/hello_world/hello_blob.o 00:07:28.045 LINK iscsi_fuzz 00:07:28.045 LINK spdk_nvme 00:07:28.304 CXX test/cpp_headers/crc16.o 00:07:28.304 CXX test/cpp_headers/crc32.o 
00:07:28.304 CC examples/blob/cli/blobcli.o 00:07:28.304 CXX test/cpp_headers/crc64.o 00:07:28.304 LINK hello_blob 00:07:28.304 CXX test/cpp_headers/dif.o 00:07:28.304 LINK spdk_bdev 00:07:28.563 CC test/event/event_perf/event_perf.o 00:07:28.563 CC examples/nvme/hello_world/hello_world.o 00:07:28.563 CC test/env/mem_callbacks/mem_callbacks.o 00:07:28.563 CXX test/cpp_headers/dma.o 00:07:28.563 LINK accel_perf 00:07:28.563 CC test/nvme/aer/aer.o 00:07:28.563 LINK event_perf 00:07:28.563 CC examples/nvme/reconnect/reconnect.o 00:07:28.822 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:28.822 CC examples/nvme/arbitration/arbitration.o 00:07:28.822 LINK hello_world 00:07:28.822 CXX test/cpp_headers/endian.o 00:07:28.822 CXX test/cpp_headers/env_dpdk.o 00:07:28.822 LINK blobcli 00:07:28.822 CC test/event/reactor/reactor.o 00:07:29.117 LINK aer 00:07:29.117 CXX test/cpp_headers/env.o 00:07:29.117 CC test/event/reactor_perf/reactor_perf.o 00:07:29.117 CC test/event/app_repeat/app_repeat.o 00:07:29.117 CXX test/cpp_headers/event.o 00:07:29.117 LINK reconnect 00:07:29.117 LINK arbitration 00:07:29.117 LINK reactor 00:07:29.117 LINK mem_callbacks 00:07:29.117 LINK reactor_perf 00:07:29.117 LINK app_repeat 00:07:29.117 CXX test/cpp_headers/fd_group.o 00:07:29.407 CXX test/cpp_headers/fd.o 00:07:29.407 CC test/nvme/reset/reset.o 00:07:29.407 LINK nvme_manage 00:07:29.407 CC test/event/scheduler/scheduler.o 00:07:29.407 CC test/env/vtophys/vtophys.o 00:07:29.407 CC examples/nvme/hotplug/hotplug.o 00:07:29.407 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:29.407 CXX test/cpp_headers/file.o 00:07:29.407 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:29.407 CC examples/nvme/abort/abort.o 00:07:29.407 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:29.407 LINK vtophys 00:07:29.666 LINK scheduler 00:07:29.666 LINK env_dpdk_post_init 00:07:29.666 LINK reset 00:07:29.666 CC test/rpc_client/rpc_client_test.o 00:07:29.666 CXX test/cpp_headers/fsdev.o 00:07:29.666 LINK cmb_copy 00:07:29.666 LINK hotplug 00:07:29.666 CXX test/cpp_headers/fsdev_module.o 00:07:29.666 LINK pmr_persistence 00:07:29.666 LINK rpc_client_test 00:07:29.925 CC test/nvme/sgl/sgl.o 00:07:29.925 CC test/env/memory/memory_ut.o 00:07:29.925 CC test/nvme/e2edp/nvme_dp.o 00:07:29.925 CC test/env/pci/pci_ut.o 00:07:29.925 CXX test/cpp_headers/ftl.o 00:07:29.925 LINK abort 00:07:29.925 CC test/nvme/overhead/overhead.o 00:07:29.925 CC test/nvme/err_injection/err_injection.o 00:07:29.925 CC test/nvme/startup/startup.o 00:07:30.184 CC test/nvme/reserve/reserve.o 00:07:30.184 CXX test/cpp_headers/fuse_dispatcher.o 00:07:30.184 LINK err_injection 00:07:30.184 LINK sgl 00:07:30.184 LINK startup 00:07:30.184 LINK nvme_dp 00:07:30.184 LINK overhead 00:07:30.184 CXX test/cpp_headers/gpt_spec.o 00:07:30.443 LINK reserve 00:07:30.443 CC examples/bdev/hello_world/hello_bdev.o 00:07:30.443 LINK pci_ut 00:07:30.443 CC test/nvme/simple_copy/simple_copy.o 00:07:30.443 CC test/nvme/connect_stress/connect_stress.o 00:07:30.443 CXX test/cpp_headers/hexlify.o 00:07:30.443 CC test/nvme/boot_partition/boot_partition.o 00:07:30.443 CXX test/cpp_headers/histogram_data.o 00:07:30.443 CC test/nvme/compliance/nvme_compliance.o 00:07:30.703 LINK hello_bdev 00:07:30.703 CXX test/cpp_headers/idxd.o 00:07:30.703 LINK boot_partition 00:07:30.703 LINK connect_stress 00:07:30.703 CC test/accel/dif/dif.o 00:07:30.703 LINK simple_copy 00:07:30.703 CC examples/bdev/bdevperf/bdevperf.o 00:07:30.703 CXX test/cpp_headers/idxd_spec.o 00:07:30.962 CXX 
test/cpp_headers/init.o 00:07:30.962 CXX test/cpp_headers/ioat.o 00:07:30.962 CXX test/cpp_headers/ioat_spec.o 00:07:30.962 LINK nvme_compliance 00:07:30.962 CXX test/cpp_headers/iscsi_spec.o 00:07:30.962 CC test/blobfs/mkfs/mkfs.o 00:07:30.962 CXX test/cpp_headers/json.o 00:07:30.962 CXX test/cpp_headers/jsonrpc.o 00:07:31.221 CC test/nvme/fused_ordering/fused_ordering.o 00:07:31.222 CXX test/cpp_headers/keyring.o 00:07:31.222 LINK memory_ut 00:07:31.222 LINK mkfs 00:07:31.222 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:31.222 CXX test/cpp_headers/keyring_module.o 00:07:31.222 CXX test/cpp_headers/likely.o 00:07:31.222 CXX test/cpp_headers/log.o 00:07:31.222 CC test/lvol/esnap/esnap.o 00:07:31.222 LINK fused_ordering 00:07:31.480 LINK doorbell_aers 00:07:31.480 CXX test/cpp_headers/lvol.o 00:07:31.480 CXX test/cpp_headers/md5.o 00:07:31.480 CXX test/cpp_headers/memory.o 00:07:31.480 LINK dif 00:07:31.480 CXX test/cpp_headers/mmio.o 00:07:31.480 CC test/nvme/fdp/fdp.o 00:07:31.480 CC test/nvme/cuse/cuse.o 00:07:31.480 CXX test/cpp_headers/nbd.o 00:07:31.480 CXX test/cpp_headers/net.o 00:07:31.480 CXX test/cpp_headers/notify.o 00:07:31.480 CXX test/cpp_headers/nvme.o 00:07:31.739 CXX test/cpp_headers/nvme_intel.o 00:07:31.739 CXX test/cpp_headers/nvme_ocssd.o 00:07:31.739 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:31.739 LINK bdevperf 00:07:31.739 CXX test/cpp_headers/nvme_spec.o 00:07:31.739 CXX test/cpp_headers/nvme_zns.o 00:07:31.739 CXX test/cpp_headers/nvmf_cmd.o 00:07:31.739 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:31.997 CXX test/cpp_headers/nvmf.o 00:07:31.997 LINK fdp 00:07:31.997 CXX test/cpp_headers/nvmf_spec.o 00:07:31.997 CC test/bdev/bdevio/bdevio.o 00:07:31.997 CXX test/cpp_headers/nvmf_transport.o 00:07:31.997 CXX test/cpp_headers/opal.o 00:07:31.997 CXX test/cpp_headers/opal_spec.o 00:07:31.997 CXX test/cpp_headers/pci_ids.o 00:07:31.997 CXX test/cpp_headers/pipe.o 00:07:32.257 CC examples/nvmf/nvmf/nvmf.o 00:07:32.257 CXX test/cpp_headers/queue.o 00:07:32.257 CXX test/cpp_headers/reduce.o 00:07:32.257 CXX test/cpp_headers/rpc.o 00:07:32.257 CXX test/cpp_headers/scheduler.o 00:07:32.257 CXX test/cpp_headers/scsi.o 00:07:32.257 CXX test/cpp_headers/scsi_spec.o 00:07:32.257 CXX test/cpp_headers/sock.o 00:07:32.257 CXX test/cpp_headers/stdinc.o 00:07:32.516 CXX test/cpp_headers/string.o 00:07:32.516 CXX test/cpp_headers/thread.o 00:07:32.516 LINK bdevio 00:07:32.516 CXX test/cpp_headers/trace.o 00:07:32.516 CXX test/cpp_headers/trace_parser.o 00:07:32.516 CXX test/cpp_headers/tree.o 00:07:32.516 LINK nvmf 00:07:32.516 CXX test/cpp_headers/ublk.o 00:07:32.516 CXX test/cpp_headers/util.o 00:07:32.516 CXX test/cpp_headers/uuid.o 00:07:32.516 CXX test/cpp_headers/version.o 00:07:32.516 CXX test/cpp_headers/vfio_user_pci.o 00:07:32.516 CXX test/cpp_headers/vfio_user_spec.o 00:07:32.783 CXX test/cpp_headers/vhost.o 00:07:32.783 CXX test/cpp_headers/vmd.o 00:07:32.783 CXX test/cpp_headers/xor.o 00:07:32.783 CXX test/cpp_headers/zipf.o 00:07:33.056 LINK cuse 00:07:38.326 LINK esnap 00:07:38.326 00:07:38.326 real 1m34.790s 00:07:38.326 user 8m10.225s 00:07:38.326 sys 2m11.326s 00:07:38.326 11:18:19 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:07:38.326 ************************************ 00:07:38.326 END TEST make 00:07:38.326 11:18:19 make -- common/autotest_common.sh@10 -- $ set +x 00:07:38.326 ************************************ 00:07:38.326 11:18:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:38.326 11:18:19 -- pm/common@29 -- $ 
signal_monitor_resources TERM 00:07:38.326 11:18:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:38.326 11:18:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:38.326 11:18:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:38.326 11:18:19 -- pm/common@44 -- $ pid=5274 00:07:38.326 11:18:19 -- pm/common@50 -- $ kill -TERM 5274 00:07:38.326 11:18:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:38.326 11:18:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:38.326 11:18:19 -- pm/common@44 -- $ pid=5276 00:07:38.326 11:18:19 -- pm/common@50 -- $ kill -TERM 5276 00:07:38.326 11:18:19 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:38.326 11:18:19 -- common/autotest_common.sh@1681 -- # lcov --version 00:07:38.326 11:18:19 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:38.326 11:18:19 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:38.326 11:18:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.326 11:18:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.326 11:18:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.326 11:18:19 -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.326 11:18:19 -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.326 11:18:19 -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.326 11:18:19 -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.326 11:18:19 -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.326 11:18:19 -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.326 11:18:19 -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.326 11:18:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.326 11:18:19 -- scripts/common.sh@344 -- # case "$op" in 00:07:38.326 11:18:19 -- scripts/common.sh@345 -- # : 1 00:07:38.326 11:18:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.326 11:18:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.326 11:18:19 -- scripts/common.sh@365 -- # decimal 1 00:07:38.326 11:18:19 -- scripts/common.sh@353 -- # local d=1 00:07:38.326 11:18:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.326 11:18:19 -- scripts/common.sh@355 -- # echo 1 00:07:38.326 11:18:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.326 11:18:19 -- scripts/common.sh@366 -- # decimal 2 00:07:38.326 11:18:19 -- scripts/common.sh@353 -- # local d=2 00:07:38.326 11:18:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.326 11:18:19 -- scripts/common.sh@355 -- # echo 2 00:07:38.326 11:18:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.326 11:18:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.326 11:18:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.326 11:18:19 -- scripts/common.sh@368 -- # return 0 00:07:38.326 11:18:19 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.326 11:18:19 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:38.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.326 --rc genhtml_branch_coverage=1 00:07:38.326 --rc genhtml_function_coverage=1 00:07:38.326 --rc genhtml_legend=1 00:07:38.326 --rc geninfo_all_blocks=1 00:07:38.326 --rc geninfo_unexecuted_blocks=1 00:07:38.326 00:07:38.326 ' 00:07:38.326 11:18:19 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:38.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.326 --rc genhtml_branch_coverage=1 00:07:38.326 --rc genhtml_function_coverage=1 00:07:38.326 --rc genhtml_legend=1 00:07:38.326 --rc geninfo_all_blocks=1 00:07:38.326 --rc geninfo_unexecuted_blocks=1 00:07:38.326 00:07:38.326 ' 00:07:38.326 11:18:19 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:38.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.326 --rc genhtml_branch_coverage=1 00:07:38.326 --rc genhtml_function_coverage=1 00:07:38.326 --rc genhtml_legend=1 00:07:38.326 --rc geninfo_all_blocks=1 00:07:38.326 --rc geninfo_unexecuted_blocks=1 00:07:38.326 00:07:38.326 ' 00:07:38.326 11:18:19 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:38.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.326 --rc genhtml_branch_coverage=1 00:07:38.326 --rc genhtml_function_coverage=1 00:07:38.326 --rc genhtml_legend=1 00:07:38.326 --rc geninfo_all_blocks=1 00:07:38.326 --rc geninfo_unexecuted_blocks=1 00:07:38.326 00:07:38.326 ' 00:07:38.326 11:18:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.326 11:18:19 -- nvmf/common.sh@7 -- # uname -s 00:07:38.326 11:18:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.326 11:18:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.326 11:18:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.326 11:18:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.326 11:18:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.326 11:18:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.326 11:18:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.326 11:18:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.326 11:18:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.326 11:18:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.326 11:18:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8e2af7c2-404d-449a-a884-b26bc0fd0f09 00:07:38.326 
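
The xtrace above steps through the lcov version gate in scripts/common.sh: each version string is split on '.', '-' and ':' into fields, then compared field by field until one side wins. A minimal bash sketch of that compare loop, reconstructed from the trace (the decimal fallback for non-numeric fields is an assumption, not the script's exact body):

#!/usr/bin/env bash
# Approximate reconstruction of the lt/cmp_versions helpers traced above.
decimal() {
    local d=$1
    # assumption: non-numeric fields compare as 0; the real script may differ
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}
cmp_versions() {
    local IFS=.-:                       # split fields on '.', '-' and ':'
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local lt=0 gt=0 v d1 d2
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=$(decimal "${ver1[v]:-0}")
        d2=$(decimal "${ver2[v]:-0}")
        ((d1 > d2)) && gt=1 && break    # first differing field decides
        ((d1 < d2)) && lt=1 && break
    done
    case "$op" in
        '<') ((lt == 1)) ;;
        '>') ((gt == 1)) ;;
    esac
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov predates 2.x; use the 1.x --rc spelling"

With lcov 1.15 against the 2 cutoff, the very first field (1 < 2) settles the comparison, which is why the trace returns 0 and selects the 1.x-style --rc lcov_branch_coverage options exported just above.
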
11:18:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=8e2af7c2-404d-449a-a884-b26bc0fd0f09 00:07:38.326 11:18:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.326 11:18:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.326 11:18:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:38.326 11:18:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.326 11:18:19 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.326 11:18:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.326 11:18:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.326 11:18:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.326 11:18:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.326 11:18:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.326 11:18:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.326 11:18:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.326 11:18:19 -- paths/export.sh@5 -- # export PATH 00:07:38.326 11:18:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.326 11:18:19 -- nvmf/common.sh@51 -- # : 0 00:07:38.326 11:18:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.326 11:18:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.326 11:18:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.326 11:18:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.326 11:18:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.326 11:18:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.326 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.326 11:18:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.326 11:18:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.326 11:18:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.326 11:18:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:38.326 11:18:19 -- spdk/autotest.sh@32 -- # uname -s 00:07:38.326 11:18:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:38.326 11:18:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:38.326 11:18:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:38.326 11:18:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:38.326 11:18:19 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:38.326 11:18:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:38.326 11:18:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:38.326 11:18:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:38.326 11:18:20 -- spdk/autotest.sh@48 -- # udevadm_pid=55228 00:07:38.326 11:18:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:38.326 11:18:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:38.326 11:18:20 -- pm/common@17 -- # local monitor 00:07:38.326 11:18:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:38.326 11:18:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:38.585 11:18:20 -- pm/common@21 -- # date +%s 00:07:38.585 11:18:20 -- pm/common@21 -- # date +%s 00:07:38.585 11:18:20 -- pm/common@25 -- # sleep 1 00:07:38.585 11:18:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728299900 00:07:38.585 11:18:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728299900 00:07:38.585 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728299900_collect-cpu-load.pm.log 00:07:38.585 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728299900_collect-vmstat.pm.log 00:07:39.522 11:18:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:39.522 11:18:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:39.522 11:18:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.522 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:07:39.522 11:18:21 -- spdk/autotest.sh@59 -- # create_test_list 00:07:39.522 11:18:21 -- common/autotest_common.sh@748 -- # xtrace_disable 00:07:39.522 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:07:39.522 11:18:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:39.522 11:18:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:39.522 11:18:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:39.522 11:18:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:39.522 11:18:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:39.522 11:18:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:39.522 11:18:21 -- common/autotest_common.sh@1455 -- # uname 00:07:39.522 11:18:21 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:07:39.522 11:18:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:39.522 11:18:21 -- common/autotest_common.sh@1475 -- # uname 00:07:39.522 11:18:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:07:39.522 11:18:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:39.522 11:18:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:39.522 lcov: LCOV version 1.15 00:07:39.522 11:18:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
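
autotest.sh@33-40 above saves the existing core pattern (the systemd-coredump pipe) and reroutes core dumps through scripts/core-collector.sh. The redirect targets are truncated in this capture, so the sketch below assumes the standard /proc/sys/kernel/core_pattern destination:

#!/usr/bin/env bash
# Sketch of the core-dump plumbing from autotest.sh@33-40 above.
# Assumption: the truncated redirects write /proc/sys/kernel/core_pattern.
rootdir=/home/vagrant/spdk_repo/spdk          # path as it appears in the log
output_dir=$rootdir/../output
old_core_pattern=$(</proc/sys/kernel/core_pattern)   # save the systemd pipe
mkdir -p "$output_dir/coredumps"
# A leading '|' makes the kernel pipe each core into the named program;
# %P = dumping PID, %s = signal number, %t = time of the dump.
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# restore the original handler when the test run exits
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT
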
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:57.628 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:57.628 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:12.560 11:18:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:12.560 11:18:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.560 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:08:12.560 11:18:53 -- spdk/autotest.sh@78 -- # rm -f 00:08:12.560 11:18:53 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:12.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.818 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:12.818 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:12.818 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:08:12.818 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:08:12.818 11:18:54 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:12.818 11:18:54 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:12.818 11:18:54 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:12.818 11:18:54 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:12.818 11:18:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:12.818 11:18:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:12.818 11:18:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:12.818 11:18:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:12.818 11:18:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:12.818 11:18:54 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:12.818 11:18:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:12.818 11:18:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:12.818 11:18:54 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:12.818 11:18:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:12.818 11:18:54 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:12.818 11:18:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:12.818 11:18:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:12.818 11:18:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:12.818 11:18:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:12.818 11:18:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:12.818 11:18:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:12.818 11:18:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:12.818 11:18:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:12.818 11:18:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:12.818 11:18:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:13.077 No valid GPT data, bailing 00:08:13.077 11:18:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:13.077 11:18:54 -- scripts/common.sh@394 -- # pt= 00:08:13.077 11:18:54 -- scripts/common.sh@395 -- # return 1 00:08:13.077 11:18:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:13.077 1+0 records in 00:08:13.077 1+0 records out 00:08:13.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190743 s, 55.0 MB/s 00:08:13.077 11:18:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:13.077 11:18:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:13.077 11:18:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:13.077 11:18:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:13.077 11:18:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:13.077 No valid GPT data, bailing 00:08:13.077 11:18:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:13.077 11:18:54 -- scripts/common.sh@394 -- # pt= 00:08:13.077 11:18:54 -- scripts/common.sh@395 -- # return 1 00:08:13.077 11:18:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:13.077 1+0 records in 00:08:13.077 1+0 records out 00:08:13.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470409 s, 223 MB/s 00:08:13.077 11:18:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:13.077 11:18:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:13.077 11:18:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:08:13.077 11:18:54 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:08:13.077 11:18:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:08:13.077 No valid GPT data, bailing 00:08:13.077 11:18:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:08:13.077 11:18:54 -- scripts/common.sh@394 -- # pt= 00:08:13.077 11:18:54 -- scripts/common.sh@395 -- # return 1 00:08:13.077 11:18:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:08:13.077 1+0 
records in 00:08:13.077 1+0 records out 00:08:13.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649364 s, 161 MB/s 00:08:13.077 11:18:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:13.077 11:18:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:13.077 11:18:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:08:13.077 11:18:54 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:08:13.078 11:18:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:08:13.078 No valid GPT data, bailing 00:08:13.336 11:18:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:08:13.336 11:18:54 -- scripts/common.sh@394 -- # pt= 00:08:13.336 11:18:54 -- scripts/common.sh@395 -- # return 1 00:08:13.336 11:18:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:08:13.336 1+0 records in 00:08:13.336 1+0 records out 00:08:13.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660428 s, 159 MB/s 00:08:13.336 11:18:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:13.336 11:18:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:13.336 11:18:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:08:13.336 11:18:54 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:08:13.336 11:18:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:08:13.336 No valid GPT data, bailing 00:08:13.336 11:18:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:08:13.336 11:18:54 -- scripts/common.sh@394 -- # pt= 00:08:13.336 11:18:54 -- scripts/common.sh@395 -- # return 1 00:08:13.336 11:18:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:08:13.336 1+0 records in 00:08:13.336 1+0 records out 00:08:13.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543028 s, 193 MB/s 00:08:13.336 11:18:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:13.336 11:18:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:13.336 11:18:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:08:13.336 11:18:54 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:08:13.336 11:18:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:08:13.336 No valid GPT data, bailing 00:08:13.336 11:18:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:08:13.336 11:18:54 -- scripts/common.sh@394 -- # pt= 00:08:13.336 11:18:54 -- scripts/common.sh@395 -- # return 1 00:08:13.336 11:18:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:08:13.336 1+0 records in 00:08:13.336 1+0 records out 00:08:13.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639568 s, 164 MB/s 00:08:13.336 11:18:54 -- spdk/autotest.sh@105 -- # sync 00:08:13.336 11:18:55 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:13.336 11:18:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:13.336 11:18:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:16.619 11:18:57 -- spdk/autotest.sh@111 -- # uname -s 00:08:16.619 11:18:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:16.619 11:18:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:16.619 11:18:57 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:16.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:17.442 
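
The pre_cleanup pass above iterates every NVMe namespace (the !(*p*) glob excludes partitions), skips zoned devices and anything already carrying a partition table, and zeroes the first MiB of the rest. A condensed sketch of that loop, with error handling trimmed:

#!/usr/bin/env bash
# Condensed form of the namespace-wipe loop traced above.
shopt -s extglob                          # enables the !(*p*) glob
for dev in /dev/nvme*n!(*p*); do          # namespaces, not partitions
    name=${dev#/dev/}
    # zoned namespaces are left alone; queue/zoned reads "none" otherwise
    zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && continue
    # the trace consults scripts/spdk-gpt.py first, then blkid; an empty
    # PTTYPE means no partition table, so the namespace is safe to scrub
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    [[ -n $pt ]] && continue
    dd if=/dev/zero of="$dev" bs=1M count=1   # clobber stale metadata
done
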
Hugepages 00:08:17.442 node hugesize free / total 00:08:17.442 node0 1048576kB 0 / 0 00:08:17.442 node0 2048kB 0 / 0 00:08:17.442 00:08:17.442 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:17.728 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:17.728 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:17.985 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:17.985 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:08:17.985 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:08:17.985 11:18:59 -- spdk/autotest.sh@117 -- # uname -s 00:08:17.985 11:18:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:17.985 11:18:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:17.985 11:18:59 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:18.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:19.851 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.851 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.851 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.851 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.851 11:19:01 -- common/autotest_common.sh@1515 -- # sleep 1 00:08:20.802 11:19:02 -- common/autotest_common.sh@1516 -- # bdfs=() 00:08:20.802 11:19:02 -- common/autotest_common.sh@1516 -- # local bdfs 00:08:20.802 11:19:02 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:08:20.802 11:19:02 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:08:20.802 11:19:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:20.802 11:19:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:20.802 11:19:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:20.802 11:19:02 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:20.802 11:19:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:21.060 11:19:02 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:21.060 11:19:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:21.060 11:19:02 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:21.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:21.884 Waiting for block devices as requested 00:08:21.884 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:21.884 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.142 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.400 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.726 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:27.726 11:19:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:27.726 11:19:08 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:27.726 11:19:08 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:08:27.726 11:19:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:27.726 11:19:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:27.726 11:19:08 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:27.726 11:19:08 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:27.726 11:19:08 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:08:27.726 11:19:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:08:27.726 11:19:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:08:27.726 11:19:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:27.726 11:19:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:08:27.726 11:19:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:27.726 11:19:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:27.726 11:19:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:27.726 11:19:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1541 -- # continue 00:08:27.726 11:19:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:27.726 11:19:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:27.726 11:19:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:27.726 11:19:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:27.726 11:19:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1541 -- # continue 00:08:27.726 11:19:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:27.726 11:19:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:27.726 11:19:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:27.726 11:19:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:27.726 11:19:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1541 -- # continue 00:08:27.726 11:19:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:27.726 11:19:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:08:27.726 11:19:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:27.726 11:19:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:27.726 11:19:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:27.726 11:19:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:27.726 11:19:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:08:27.726 11:19:09 -- common/autotest_common.sh@1541 -- # continue 00:08:27.726 11:19:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:27.726 11:19:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.726 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:08:27.726 11:19:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:27.726 11:19:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.726 11:19:09 -- common/autotest_common.sh@10 -- # set +x 00:08:27.726 11:19:09 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:28.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:29.240 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:29.241 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:29.241 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:29.241 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:29.241 11:19:10 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:29.241 11:19:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.241 11:19:10 -- common/autotest_common.sh@10 -- # set +x 00:08:29.241 11:19:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:29.241 11:19:10 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:29.241 11:19:10 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:29.241 11:19:10 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:29.241 11:19:10 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:29.241 11:19:10 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:29.241 11:19:10 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:29.241 11:19:10 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:29.241 11:19:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:29.241 11:19:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:29.241 11:19:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:29.241 11:19:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:29.241 11:19:10 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:29.500 11:19:11 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:29.500 11:19:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:29.500 11:19:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:29.500 11:19:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:29.500 11:19:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:29.500 11:19:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:29.500 11:19:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:29.500 11:19:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:29.500 11:19:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:29.500 11:19:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:29.500 11:19:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:29.500 11:19:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:08:29.500 11:19:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:29.500 11:19:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
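
The id-ctrl probes above resolve each PCI address to its /dev/nvmeX controller through sysfs symlinks, then read two fields from nvme-cli output: OACS, where bit 3 set means namespace management is supported (hence oacs_ns_manage=8 for 0x12a), and unvmcap, the unallocated capacity. A sketch of one such probe, assuming nvme-cli is installed:

#!/usr/bin/env bash
# One iteration of the controller probe traced above; assumes nvme-cli.
bdf=0000:00:10.0                                   # PCI address from the log
# sysfs: /sys/class/nvme/nvmeX resolves to a path containing <bdf>/nvme/nvmeX
path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrl=/dev/$(basename "$path")                      # e.g. /dev/nvme1
# OACS (Optional Admin Command Support): bit 3 (0x8) = namespace management
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then
    # unvmcap = unallocated NVM capacity; 0 means nothing left to reclaim
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
    echo "$ctrl: ns-manage supported, unvmcap=$unvmcap"
fi
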
00:08:29.500 11:19:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:29.500 11:19:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:08:29.500 11:19:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:29.500 11:19:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:29.500 11:19:11 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:08:29.500 11:19:11 -- common/autotest_common.sh@1570 -- # return 0 00:08:29.500 11:19:11 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:08:29.500 11:19:11 -- common/autotest_common.sh@1578 -- # return 0 00:08:29.500 11:19:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:29.500 11:19:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:29.500 11:19:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:29.500 11:19:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:29.500 11:19:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:29.500 11:19:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.500 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.500 11:19:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:29.500 11:19:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:29.500 11:19:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.500 11:19:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.500 11:19:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.500 ************************************ 00:08:29.500 START TEST env 00:08:29.500 ************************************ 00:08:29.500 11:19:11 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:29.500 * Looking for test storage... 00:08:29.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:29.500 11:19:11 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:29.500 11:19:11 env -- common/autotest_common.sh@1681 -- # lcov --version 00:08:29.500 11:19:11 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:29.758 11:19:11 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:29.758 11:19:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.758 11:19:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.758 11:19:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.758 11:19:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.758 11:19:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.758 11:19:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.758 11:19:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.758 11:19:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.758 11:19:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.758 11:19:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.758 11:19:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.758 11:19:11 env -- scripts/common.sh@344 -- # case "$op" in 00:08:29.758 11:19:11 env -- scripts/common.sh@345 -- # : 1 00:08:29.758 11:19:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.758 11:19:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.758 11:19:11 env -- scripts/common.sh@365 -- # decimal 1 00:08:29.758 11:19:11 env -- scripts/common.sh@353 -- # local d=1 00:08:29.758 11:19:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.758 11:19:11 env -- scripts/common.sh@355 -- # echo 1 00:08:29.758 11:19:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.758 11:19:11 env -- scripts/common.sh@366 -- # decimal 2 00:08:29.758 11:19:11 env -- scripts/common.sh@353 -- # local d=2 00:08:29.758 11:19:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.758 11:19:11 env -- scripts/common.sh@355 -- # echo 2 00:08:29.758 11:19:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.758 11:19:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.758 11:19:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.758 11:19:11 env -- scripts/common.sh@368 -- # return 0 00:08:29.758 11:19:11 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.758 11:19:11 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.759 --rc genhtml_branch_coverage=1 00:08:29.759 --rc genhtml_function_coverage=1 00:08:29.759 --rc genhtml_legend=1 00:08:29.759 --rc geninfo_all_blocks=1 00:08:29.759 --rc geninfo_unexecuted_blocks=1 00:08:29.759 00:08:29.759 ' 00:08:29.759 11:19:11 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.759 --rc genhtml_branch_coverage=1 00:08:29.759 --rc genhtml_function_coverage=1 00:08:29.759 --rc genhtml_legend=1 00:08:29.759 --rc geninfo_all_blocks=1 00:08:29.759 --rc geninfo_unexecuted_blocks=1 00:08:29.759 00:08:29.759 ' 00:08:29.759 11:19:11 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.759 --rc genhtml_branch_coverage=1 00:08:29.759 --rc genhtml_function_coverage=1 00:08:29.759 --rc genhtml_legend=1 00:08:29.759 --rc geninfo_all_blocks=1 00:08:29.759 --rc geninfo_unexecuted_blocks=1 00:08:29.759 00:08:29.759 ' 00:08:29.759 11:19:11 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.759 --rc genhtml_branch_coverage=1 00:08:29.759 --rc genhtml_function_coverage=1 00:08:29.759 --rc genhtml_legend=1 00:08:29.759 --rc geninfo_all_blocks=1 00:08:29.759 --rc geninfo_unexecuted_blocks=1 00:08:29.759 00:08:29.759 ' 00:08:29.759 11:19:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:29.759 11:19:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.759 11:19:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.759 11:19:11 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.759 ************************************ 00:08:29.759 START TEST env_memory 00:08:29.759 ************************************ 00:08:29.759 11:19:11 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:29.759 00:08:29.759 00:08:29.759 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.759 http://cunit.sourceforge.net/ 00:08:29.759 00:08:29.759 00:08:29.759 Suite: memory 00:08:29.759 Test: alloc and free memory map ...[2024-10-07 11:19:11.403617] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:29.759 passed 00:08:30.017 Test: mem map translation ...[2024-10-07 11:19:11.473387] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:30.017 [2024-10-07 11:19:11.473496] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:30.017 [2024-10-07 11:19:11.473575] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:30.017 [2024-10-07 11:19:11.473604] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:30.017 passed 00:08:30.017 Test: mem map registration ...[2024-10-07 11:19:11.548596] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:30.017 [2024-10-07 11:19:11.548682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:30.017 passed 00:08:30.017 Test: mem map adjacent registrations ...passed 00:08:30.017 00:08:30.017 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.017 suites 1 1 n/a 0 0 00:08:30.017 tests 4 4 4 0 0 00:08:30.017 asserts 152 152 152 0 n/a 00:08:30.017 00:08:30.017 Elapsed time = 0.304 seconds 00:08:30.017 00:08:30.017 real 0m0.348s 00:08:30.017 user 0m0.309s 00:08:30.017 sys 0m0.030s 00:08:30.017 11:19:11 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.017 11:19:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:30.017 ************************************ 00:08:30.017 END TEST env_memory 00:08:30.017 ************************************ 00:08:30.017 11:19:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:30.017 11:19:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.017 11:19:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.017 11:19:11 env -- common/autotest_common.sh@10 -- # set +x 00:08:30.275 ************************************ 00:08:30.275 START TEST env_vtophys 00:08:30.275 ************************************ 00:08:30.275 11:19:11 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:30.275 EAL: lib.eal log level changed from notice to debug 00:08:30.275 EAL: Detected lcore 0 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 1 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 2 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 3 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 4 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 5 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 6 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 7 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 8 as core 0 on socket 0 00:08:30.275 EAL: Detected lcore 9 as core 0 on socket 0 00:08:30.275 EAL: Maximum logical cores by configuration: 128 00:08:30.275 EAL: Detected CPU lcores: 10 00:08:30.275 EAL: Detected NUMA nodes: 1 00:08:30.275 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:30.275 EAL: Detected shared linkage of DPDK 00:08:30.275 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:30.275 EAL: Selected IOVA mode 'PA' 00:08:30.275 EAL: Probing VFIO support... 00:08:30.275 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:30.275 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:30.275 EAL: Ask a virtual area of 0x2e000 bytes 00:08:30.275 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:30.275 EAL: Setting up physically contiguous memory... 00:08:30.275 EAL: Setting maximum number of open files to 524288 00:08:30.275 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:30.275 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:30.275 EAL: Ask a virtual area of 0x61000 bytes 00:08:30.275 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:30.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:30.275 EAL: Ask a virtual area of 0x400000000 bytes 00:08:30.275 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:30.275 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:30.275 EAL: Ask a virtual area of 0x61000 bytes 00:08:30.275 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:30.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:30.275 EAL: Ask a virtual area of 0x400000000 bytes 00:08:30.275 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:30.275 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:30.275 EAL: Ask a virtual area of 0x61000 bytes 00:08:30.275 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:30.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:30.275 EAL: Ask a virtual area of 0x400000000 bytes 00:08:30.275 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:30.275 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:30.275 EAL: Ask a virtual area of 0x61000 bytes 00:08:30.275 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:30.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:30.275 EAL: Ask a virtual area of 0x400000000 bytes 00:08:30.275 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:30.275 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:30.275 EAL: Hugepages will be freed exactly as allocated. 00:08:30.275 EAL: No shared files mode enabled, IPC is disabled 00:08:30.275 EAL: No shared files mode enabled, IPC is disabled 00:08:30.275 EAL: TSC frequency is ~2490000 KHz 00:08:30.275 EAL: Main lcore 0 is ready (tid=7f228ec15a40;cpuset=[0]) 00:08:30.275 EAL: Trying to obtain current memory policy. 00:08:30.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:30.275 EAL: Restoring previous memory policy: 0 00:08:30.275 EAL: request: mp_malloc_sync 00:08:30.275 EAL: No shared files mode enabled, IPC is disabled 00:08:30.275 EAL: Heap on socket 0 was expanded by 2MB 00:08:30.275 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:30.275 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:30.275 EAL: Mem event callback 'spdk:(nil)' registered 00:08:30.275 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:30.534 00:08:30.534 00:08:30.534 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.534 http://cunit.sourceforge.net/ 00:08:30.534 00:08:30.534 00:08:30.534 Suite: components_suite 00:08:31.100 Test: vtophys_malloc_test ...passed 00:08:31.100 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:31.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.100 EAL: Restoring previous memory policy: 4 00:08:31.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.100 EAL: request: mp_malloc_sync 00:08:31.100 EAL: No shared files mode enabled, IPC is disabled 00:08:31.100 EAL: Heap on socket 0 was expanded by 4MB 00:08:31.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.100 EAL: request: mp_malloc_sync 00:08:31.100 EAL: No shared files mode enabled, IPC is disabled 00:08:31.100 EAL: Heap on socket 0 was shrunk by 4MB 00:08:31.100 EAL: Trying to obtain current memory policy. 00:08:31.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.100 EAL: Restoring previous memory policy: 4 00:08:31.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.100 EAL: request: mp_malloc_sync 00:08:31.100 EAL: No shared files mode enabled, IPC is disabled 00:08:31.100 EAL: Heap on socket 0 was expanded by 6MB 00:08:31.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.100 EAL: request: mp_malloc_sync 00:08:31.100 EAL: No shared files mode enabled, IPC is disabled 00:08:31.100 EAL: Heap on socket 0 was shrunk by 6MB 00:08:31.100 EAL: Trying to obtain current memory policy. 00:08:31.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.100 EAL: Restoring previous memory policy: 4 00:08:31.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.100 EAL: request: mp_malloc_sync 00:08:31.100 EAL: No shared files mode enabled, IPC is disabled 00:08:31.100 EAL: Heap on socket 0 was expanded by 10MB 00:08:31.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.100 EAL: request: mp_malloc_sync 00:08:31.100 EAL: No shared files mode enabled, IPC is disabled 00:08:31.100 EAL: Heap on socket 0 was shrunk by 10MB 00:08:31.100 EAL: Trying to obtain current memory policy. 00:08:31.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.100 EAL: Restoring previous memory policy: 4 00:08:31.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.100 EAL: request: mp_malloc_sync 00:08:31.100 EAL: No shared files mode enabled, IPC is disabled 00:08:31.101 EAL: Heap on socket 0 was expanded by 18MB 00:08:31.101 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.101 EAL: request: mp_malloc_sync 00:08:31.101 EAL: No shared files mode enabled, IPC is disabled 00:08:31.101 EAL: Heap on socket 0 was shrunk by 18MB 00:08:31.101 EAL: Trying to obtain current memory policy. 00:08:31.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.101 EAL: Restoring previous memory policy: 4 00:08:31.101 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.101 EAL: request: mp_malloc_sync 00:08:31.101 EAL: No shared files mode enabled, IPC is disabled 00:08:31.101 EAL: Heap on socket 0 was expanded by 34MB 00:08:31.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.359 EAL: request: mp_malloc_sync 00:08:31.359 EAL: No shared files mode enabled, IPC is disabled 00:08:31.359 EAL: Heap on socket 0 was shrunk by 34MB 00:08:31.359 EAL: Trying to obtain current memory policy. 
00:08:31.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.359 EAL: Restoring previous memory policy: 4 00:08:31.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.359 EAL: request: mp_malloc_sync 00:08:31.359 EAL: No shared files mode enabled, IPC is disabled 00:08:31.359 EAL: Heap on socket 0 was expanded by 66MB 00:08:31.359 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.359 EAL: request: mp_malloc_sync 00:08:31.359 EAL: No shared files mode enabled, IPC is disabled 00:08:31.359 EAL: Heap on socket 0 was shrunk by 66MB 00:08:31.618 EAL: Trying to obtain current memory policy. 00:08:31.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.618 EAL: Restoring previous memory policy: 4 00:08:31.618 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.618 EAL: request: mp_malloc_sync 00:08:31.618 EAL: No shared files mode enabled, IPC is disabled 00:08:31.618 EAL: Heap on socket 0 was expanded by 130MB 00:08:31.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.876 EAL: request: mp_malloc_sync 00:08:31.876 EAL: No shared files mode enabled, IPC is disabled 00:08:31.876 EAL: Heap on socket 0 was shrunk by 130MB 00:08:32.134 EAL: Trying to obtain current memory policy. 00:08:32.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:32.134 EAL: Restoring previous memory policy: 4 00:08:32.134 EAL: Calling mem event callback 'spdk:(nil)' 00:08:32.134 EAL: request: mp_malloc_sync 00:08:32.134 EAL: No shared files mode enabled, IPC is disabled 00:08:32.134 EAL: Heap on socket 0 was expanded by 258MB 00:08:32.701 EAL: Calling mem event callback 'spdk:(nil)' 00:08:32.701 EAL: request: mp_malloc_sync 00:08:32.701 EAL: No shared files mode enabled, IPC is disabled 00:08:32.701 EAL: Heap on socket 0 was shrunk by 258MB 00:08:33.268 EAL: Trying to obtain current memory policy. 00:08:33.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.525 EAL: Restoring previous memory policy: 4 00:08:33.525 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.525 EAL: request: mp_malloc_sync 00:08:33.525 EAL: No shared files mode enabled, IPC is disabled 00:08:33.525 EAL: Heap on socket 0 was expanded by 514MB 00:08:34.460 EAL: Calling mem event callback 'spdk:(nil)' 00:08:34.717 EAL: request: mp_malloc_sync 00:08:34.717 EAL: No shared files mode enabled, IPC is disabled 00:08:34.717 EAL: Heap on socket 0 was shrunk by 514MB 00:08:35.652 EAL: Trying to obtain current memory policy. 
00:08:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:35.915 EAL: Restoring previous memory policy: 4 00:08:35.915 EAL: Calling mem event callback 'spdk:(nil)' 00:08:35.915 EAL: request: mp_malloc_sync 00:08:35.915 EAL: No shared files mode enabled, IPC is disabled 00:08:35.915 EAL: Heap on socket 0 was expanded by 1026MB 00:08:38.441 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.441 EAL: request: mp_malloc_sync 00:08:38.441 EAL: No shared files mode enabled, IPC is disabled 00:08:38.441 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:40.344 passed 00:08:40.344 00:08:40.344 Run Summary: Type Total Ran Passed Failed Inactive 00:08:40.344 suites 1 1 n/a 0 0 00:08:40.344 tests 2 2 2 0 0 00:08:40.344 asserts 5670 5670 5670 0 n/a 00:08:40.344 00:08:40.344 Elapsed time = 9.558 seconds 00:08:40.344 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.344 EAL: request: mp_malloc_sync 00:08:40.344 EAL: No shared files mode enabled, IPC is disabled 00:08:40.344 EAL: Heap on socket 0 was shrunk by 2MB 00:08:40.344 EAL: No shared files mode enabled, IPC is disabled 00:08:40.344 EAL: No shared files mode enabled, IPC is disabled 00:08:40.344 EAL: No shared files mode enabled, IPC is disabled 00:08:40.344 00:08:40.344 real 0m9.941s 00:08:40.344 user 0m8.372s 00:08:40.344 sys 0m1.388s 00:08:40.344 11:19:21 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.344 11:19:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:40.344 ************************************ 00:08:40.344 END TEST env_vtophys 00:08:40.344 ************************************ 00:08:40.344 11:19:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:40.344 11:19:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.344 11:19:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.344 11:19:21 env -- common/autotest_common.sh@10 -- # set +x 00:08:40.344 ************************************ 00:08:40.344 START TEST env_pci 00:08:40.344 ************************************ 00:08:40.344 11:19:21 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:40.344 00:08:40.344 00:08:40.344 CUnit - A unit testing framework for C - Version 2.1-3 00:08:40.344 http://cunit.sourceforge.net/ 00:08:40.344 00:08:40.344 00:08:40.344 Suite: pci 00:08:40.344 Test: pci_hook ...[2024-10-07 11:19:21.794284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58119 has claimed it 00:08:40.344 EAL: Cannot find device (10000:00:01.0) 00:08:40.344 EAL: Failed to attach device on primary process 00:08:40.344 passed 00:08:40.344 00:08:40.344 Run Summary: Type Total Ran Passed Failed Inactive 00:08:40.344 suites 1 1 n/a 0 0 00:08:40.344 tests 1 1 1 0 0 00:08:40.344 asserts 25 25 25 0 n/a 00:08:40.344 00:08:40.344 Elapsed time = 0.009 seconds 00:08:40.344 00:08:40.344 real 0m0.104s 00:08:40.344 user 0m0.046s 00:08:40.344 sys 0m0.057s 00:08:40.344 11:19:21 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.344 ************************************ 00:08:40.344 END TEST env_pci 00:08:40.344 11:19:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:40.344 ************************************ 00:08:40.344 11:19:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:40.344 11:19:21 env -- env/env.sh@15 -- # uname 00:08:40.344 11:19:21 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:40.344 11:19:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:40.344 11:19:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:40.344 11:19:21 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:40.344 11:19:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.344 11:19:21 env -- common/autotest_common.sh@10 -- # set +x 00:08:40.344 ************************************ 00:08:40.344 START TEST env_dpdk_post_init 00:08:40.344 ************************************ 00:08:40.344 11:19:21 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:40.344 EAL: Detected CPU lcores: 10 00:08:40.344 EAL: Detected NUMA nodes: 1 00:08:40.344 EAL: Detected shared linkage of DPDK 00:08:40.344 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:40.344 EAL: Selected IOVA mode 'PA' 00:08:40.602 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:40.602 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:40.602 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:40.602 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:08:40.602 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:08:40.602 Starting DPDK initialization... 00:08:40.602 Starting SPDK post initialization... 00:08:40.602 SPDK NVMe probe 00:08:40.602 Attaching to 0000:00:10.0 00:08:40.602 Attaching to 0000:00:11.0 00:08:40.602 Attaching to 0000:00:12.0 00:08:40.602 Attaching to 0000:00:13.0 00:08:40.602 Attached to 0000:00:10.0 00:08:40.602 Attached to 0000:00:11.0 00:08:40.602 Attached to 0000:00:13.0 00:08:40.602 Attached to 0000:00:12.0 00:08:40.602 Cleaning up... 
00:08:40.602 00:08:40.602 real 0m0.324s 00:08:40.602 user 0m0.120s 00:08:40.602 sys 0m0.107s 00:08:40.602 11:19:22 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.602 ************************************ 00:08:40.602 END TEST env_dpdk_post_init 00:08:40.602 ************************************ 00:08:40.602 11:19:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:40.863 11:19:22 env -- env/env.sh@26 -- # uname 00:08:40.863 11:19:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:40.863 11:19:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:40.863 11:19:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.863 11:19:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.863 11:19:22 env -- common/autotest_common.sh@10 -- # set +x 00:08:40.863 ************************************ 00:08:40.863 START TEST env_mem_callbacks 00:08:40.863 ************************************ 00:08:40.863 11:19:22 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:40.863 EAL: Detected CPU lcores: 10 00:08:40.863 EAL: Detected NUMA nodes: 1 00:08:40.863 EAL: Detected shared linkage of DPDK 00:08:40.863 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:40.863 EAL: Selected IOVA mode 'PA' 00:08:40.863 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:40.863 00:08:40.863 00:08:40.863 CUnit - A unit testing framework for C - Version 2.1-3 00:08:40.863 http://cunit.sourceforge.net/ 00:08:40.863 00:08:40.863 00:08:40.863 Suite: memory 00:08:40.863 Test: test ... 00:08:40.863 register 0x200000200000 2097152 00:08:40.863 malloc 3145728 00:08:40.863 register 0x200000400000 4194304 00:08:40.863 buf 0x2000004fffc0 len 3145728 PASSED 00:08:40.863 malloc 64 00:08:40.863 buf 0x2000004ffec0 len 64 PASSED 00:08:40.863 malloc 4194304 00:08:40.863 register 0x200000800000 6291456 00:08:40.863 buf 0x2000009fffc0 len 4194304 PASSED 00:08:40.863 free 0x2000004fffc0 3145728 00:08:40.863 free 0x2000004ffec0 64 00:08:40.863 unregister 0x200000400000 4194304 PASSED 00:08:40.863 free 0x2000009fffc0 4194304 00:08:41.122 unregister 0x200000800000 6291456 PASSED 00:08:41.122 malloc 8388608 00:08:41.122 register 0x200000400000 10485760 00:08:41.122 buf 0x2000005fffc0 len 8388608 PASSED 00:08:41.122 free 0x2000005fffc0 8388608 00:08:41.122 unregister 0x200000400000 10485760 PASSED 00:08:41.122 passed 00:08:41.122 00:08:41.122 Run Summary: Type Total Ran Passed Failed Inactive 00:08:41.122 suites 1 1 n/a 0 0 00:08:41.123 tests 1 1 1 0 0 00:08:41.123 asserts 15 15 15 0 n/a 00:08:41.123 00:08:41.123 Elapsed time = 0.091 seconds 00:08:41.123 00:08:41.123 real 0m0.304s 00:08:41.123 user 0m0.126s 00:08:41.123 sys 0m0.077s 00:08:41.123 11:19:22 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.123 11:19:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:41.123 ************************************ 00:08:41.123 END TEST env_mem_callbacks 00:08:41.123 ************************************ 00:08:41.123 00:08:41.123 real 0m11.636s 00:08:41.123 user 0m9.235s 00:08:41.123 sys 0m2.030s 00:08:41.123 11:19:22 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.123 11:19:22 env -- common/autotest_common.sh@10 -- # set +x 00:08:41.123 ************************************ 00:08:41.123 END TEST env 00:08:41.123 
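[Editorial note, between END TEST env and the rpc suite.] Each env sub-test above is a standalone binary under test/env, so the suite can be rerun piecemeal outside autotest. A sketch assuming the vagrant checkout from this log and root privileges (EAL needs to set up memory and claim devices):

    # Rerun the env sub-tests by hand; paths and flags match the log above
    # (env.sh assembled '-c 0x1 --base-virtaddr=0x200000000000' on Linux).
    testdir=/home/vagrant/spdk_repo/spdk/test/env
    sudo "$testdir/memory/memory_ut"    # mem map alloc/translation/registration
    sudo "$testdir/vtophys/vtophys"     # vtophys_malloc_test, vtophys_spdk_malloc_test
    sudo "$testdir/pci/pci_ut"          # pci_hook device-claim check
    sudo "$testdir/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
    sudo "$testdir/mem_callbacks/mem_callbacks"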
************************************ 00:08:41.123 11:19:22 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:41.123 11:19:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:41.123 11:19:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.123 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:08:41.123 ************************************ 00:08:41.123 START TEST rpc 00:08:41.123 ************************************ 00:08:41.123 11:19:22 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:41.384 * Looking for test storage... 00:08:41.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:41.384 11:19:22 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:41.384 11:19:22 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:41.384 11:19:22 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:41.384 11:19:23 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:41.384 11:19:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.384 11:19:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.384 11:19:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.384 11:19:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.384 11:19:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.384 11:19:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.384 11:19:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.384 11:19:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.384 11:19:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.384 11:19:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.384 11:19:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.384 11:19:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:41.384 11:19:23 rpc -- scripts/common.sh@345 -- # : 1 00:08:41.384 11:19:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.384 11:19:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.384 11:19:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:41.384 11:19:23 rpc -- scripts/common.sh@353 -- # local d=1 00:08:41.384 11:19:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.384 11:19:23 rpc -- scripts/common.sh@355 -- # echo 1 00:08:41.384 11:19:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.384 11:19:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:41.384 11:19:23 rpc -- scripts/common.sh@353 -- # local d=2 00:08:41.384 11:19:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.384 11:19:23 rpc -- scripts/common.sh@355 -- # echo 2 00:08:41.384 11:19:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.384 11:19:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.384 11:19:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.384 11:19:23 rpc -- scripts/common.sh@368 -- # return 0 00:08:41.384 11:19:23 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.384 11:19:23 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:41.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.384 --rc genhtml_branch_coverage=1 00:08:41.384 --rc genhtml_function_coverage=1 00:08:41.384 --rc genhtml_legend=1 00:08:41.384 --rc geninfo_all_blocks=1 00:08:41.385 --rc geninfo_unexecuted_blocks=1 00:08:41.385 00:08:41.385 ' 00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.385 --rc genhtml_branch_coverage=1 00:08:41.385 --rc genhtml_function_coverage=1 00:08:41.385 --rc genhtml_legend=1 00:08:41.385 --rc geninfo_all_blocks=1 00:08:41.385 --rc geninfo_unexecuted_blocks=1 00:08:41.385 00:08:41.385 ' 00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.385 --rc genhtml_branch_coverage=1 00:08:41.385 --rc genhtml_function_coverage=1 00:08:41.385 --rc genhtml_legend=1 00:08:41.385 --rc geninfo_all_blocks=1 00:08:41.385 --rc geninfo_unexecuted_blocks=1 00:08:41.385 00:08:41.385 ' 00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.385 --rc genhtml_branch_coverage=1 00:08:41.385 --rc genhtml_function_coverage=1 00:08:41.385 --rc genhtml_legend=1 00:08:41.385 --rc geninfo_all_blocks=1 00:08:41.385 --rc geninfo_unexecuted_blocks=1 00:08:41.385 00:08:41.385 ' 00:08:41.385 11:19:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58251 00:08:41.385 11:19:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:41.385 11:19:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:41.385 11:19:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58251 00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@831 -- # '[' -z 58251 ']' 00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
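[Editorial note — the harness is now blocked in waitforlisten until spdk_tgt answers on its socket.] What that amounts to can be sketched in a few lines: start the target with the bdev tracepoint group enabled (the -e bdev from rpc.sh@64 above) and poll the default RPC socket, /var/tmp/spdk.sock, until a method call succeeds. waitforlisten in autotest_common.sh does more bookkeeping; this is a minimal stand-in:

    # Start spdk_tgt and block until the RPC socket accepts requests.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt (pid $spdk_pid) is listening on /var/tmp/spdk.sock"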
00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.385 11:19:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.644 [2024-10-07 11:19:23.162661] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:41.644 [2024-10-07 11:19:23.162835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58251 ] 00:08:41.644 [2024-10-07 11:19:23.341731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.904 [2024-10-07 11:19:23.572089] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:41.904 [2024-10-07 11:19:23.572158] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58251' to capture a snapshot of events at runtime. 00:08:41.904 [2024-10-07 11:19:23.572173] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.904 [2024-10-07 11:19:23.572189] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.904 [2024-10-07 11:19:23.572212] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58251 for offline analysis/debug. 00:08:41.904 [2024-10-07 11:19:23.573544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.838 11:19:24 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.838 11:19:24 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:42.838 11:19:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:42.838 11:19:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:42.838 11:19:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:42.838 11:19:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:42.838 11:19:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.838 11:19:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.838 11:19:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 ************************************ 00:08:42.838 START TEST rpc_integrity 00:08:42.838 ************************************ 00:08:42.838 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:42.838 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:42.838 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.838 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.097 11:19:24 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:43.097 { 00:08:43.097 "name": "Malloc0", 00:08:43.097 "aliases": [ 00:08:43.097 "110e5b33-4efb-49fb-9e6b-75ac9c601cb9" 00:08:43.097 ], 00:08:43.097 "product_name": "Malloc disk", 00:08:43.097 "block_size": 512, 00:08:43.097 "num_blocks": 16384, 00:08:43.097 "uuid": "110e5b33-4efb-49fb-9e6b-75ac9c601cb9", 00:08:43.097 "assigned_rate_limits": { 00:08:43.097 "rw_ios_per_sec": 0, 00:08:43.097 "rw_mbytes_per_sec": 0, 00:08:43.097 "r_mbytes_per_sec": 0, 00:08:43.097 "w_mbytes_per_sec": 0 00:08:43.097 }, 00:08:43.097 "claimed": false, 00:08:43.097 "zoned": false, 00:08:43.097 "supported_io_types": { 00:08:43.097 "read": true, 00:08:43.097 "write": true, 00:08:43.097 "unmap": true, 00:08:43.097 "flush": true, 00:08:43.097 "reset": true, 00:08:43.097 "nvme_admin": false, 00:08:43.097 "nvme_io": false, 00:08:43.097 "nvme_io_md": false, 00:08:43.097 "write_zeroes": true, 00:08:43.097 "zcopy": true, 00:08:43.097 "get_zone_info": false, 00:08:43.097 "zone_management": false, 00:08:43.097 "zone_append": false, 00:08:43.097 "compare": false, 00:08:43.097 "compare_and_write": false, 00:08:43.097 "abort": true, 00:08:43.097 "seek_hole": false, 00:08:43.097 "seek_data": false, 00:08:43.097 "copy": true, 00:08:43.097 "nvme_iov_md": false 00:08:43.097 }, 00:08:43.097 "memory_domains": [ 00:08:43.097 { 00:08:43.097 "dma_device_id": "system", 00:08:43.097 "dma_device_type": 1 00:08:43.097 }, 00:08:43.097 { 00:08:43.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.097 "dma_device_type": 2 00:08:43.097 } 00:08:43.097 ], 00:08:43.097 "driver_specific": {} 00:08:43.097 } 00:08:43.097 ]' 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.097 [2024-10-07 11:19:24.709930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:43.097 [2024-10-07 11:19:24.710165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.097 [2024-10-07 11:19:24.710210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:43.097 [2024-10-07 11:19:24.710227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.097 [2024-10-07 11:19:24.713101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.097 [2024-10-07 11:19:24.713266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:43.097 Passthru0 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.097 
11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.097 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.097 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:43.097 { 00:08:43.097 "name": "Malloc0", 00:08:43.097 "aliases": [ 00:08:43.097 "110e5b33-4efb-49fb-9e6b-75ac9c601cb9" 00:08:43.097 ], 00:08:43.097 "product_name": "Malloc disk", 00:08:43.097 "block_size": 512, 00:08:43.097 "num_blocks": 16384, 00:08:43.097 "uuid": "110e5b33-4efb-49fb-9e6b-75ac9c601cb9", 00:08:43.097 "assigned_rate_limits": { 00:08:43.097 "rw_ios_per_sec": 0, 00:08:43.097 "rw_mbytes_per_sec": 0, 00:08:43.097 "r_mbytes_per_sec": 0, 00:08:43.097 "w_mbytes_per_sec": 0 00:08:43.097 }, 00:08:43.097 "claimed": true, 00:08:43.097 "claim_type": "exclusive_write", 00:08:43.097 "zoned": false, 00:08:43.097 "supported_io_types": { 00:08:43.097 "read": true, 00:08:43.097 "write": true, 00:08:43.097 "unmap": true, 00:08:43.097 "flush": true, 00:08:43.097 "reset": true, 00:08:43.097 "nvme_admin": false, 00:08:43.097 "nvme_io": false, 00:08:43.097 "nvme_io_md": false, 00:08:43.097 "write_zeroes": true, 00:08:43.097 "zcopy": true, 00:08:43.097 "get_zone_info": false, 00:08:43.097 "zone_management": false, 00:08:43.097 "zone_append": false, 00:08:43.097 "compare": false, 00:08:43.097 "compare_and_write": false, 00:08:43.097 "abort": true, 00:08:43.097 "seek_hole": false, 00:08:43.097 "seek_data": false, 00:08:43.097 "copy": true, 00:08:43.097 "nvme_iov_md": false 00:08:43.097 }, 00:08:43.097 "memory_domains": [ 00:08:43.097 { 00:08:43.097 "dma_device_id": "system", 00:08:43.097 "dma_device_type": 1 00:08:43.097 }, 00:08:43.097 { 00:08:43.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.097 "dma_device_type": 2 00:08:43.097 } 00:08:43.097 ], 00:08:43.097 "driver_specific": {} 00:08:43.097 }, 00:08:43.097 { 00:08:43.097 "name": "Passthru0", 00:08:43.097 "aliases": [ 00:08:43.097 "9b1bb2a6-7778-51c9-9d9d-9352d2e696bf" 00:08:43.097 ], 00:08:43.097 "product_name": "passthru", 00:08:43.097 "block_size": 512, 00:08:43.097 "num_blocks": 16384, 00:08:43.097 "uuid": "9b1bb2a6-7778-51c9-9d9d-9352d2e696bf", 00:08:43.097 "assigned_rate_limits": { 00:08:43.097 "rw_ios_per_sec": 0, 00:08:43.097 "rw_mbytes_per_sec": 0, 00:08:43.097 "r_mbytes_per_sec": 0, 00:08:43.097 "w_mbytes_per_sec": 0 00:08:43.097 }, 00:08:43.097 "claimed": false, 00:08:43.097 "zoned": false, 00:08:43.097 "supported_io_types": { 00:08:43.097 "read": true, 00:08:43.097 "write": true, 00:08:43.097 "unmap": true, 00:08:43.097 "flush": true, 00:08:43.097 "reset": true, 00:08:43.097 "nvme_admin": false, 00:08:43.097 "nvme_io": false, 00:08:43.097 "nvme_io_md": false, 00:08:43.097 "write_zeroes": true, 00:08:43.097 "zcopy": true, 00:08:43.097 "get_zone_info": false, 00:08:43.097 "zone_management": false, 00:08:43.097 "zone_append": false, 00:08:43.097 "compare": false, 00:08:43.097 "compare_and_write": false, 00:08:43.097 "abort": true, 00:08:43.097 "seek_hole": false, 00:08:43.097 "seek_data": false, 00:08:43.097 "copy": true, 00:08:43.098 "nvme_iov_md": false 00:08:43.098 }, 00:08:43.098 "memory_domains": [ 00:08:43.098 { 00:08:43.098 "dma_device_id": "system", 00:08:43.098 "dma_device_type": 1 00:08:43.098 }, 00:08:43.098 { 00:08:43.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.098 "dma_device_type": 2 
00:08:43.098 } 00:08:43.098 ], 00:08:43.098 "driver_specific": { 00:08:43.098 "passthru": { 00:08:43.098 "name": "Passthru0", 00:08:43.098 "base_bdev_name": "Malloc0" 00:08:43.098 } 00:08:43.098 } 00:08:43.098 } 00:08:43.098 ]' 00:08:43.098 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:43.098 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:43.098 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:43.098 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.098 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.357 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.357 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.357 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:43.357 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:43.357 ************************************ 00:08:43.357 END TEST rpc_integrity 00:08:43.357 ************************************ 00:08:43.357 11:19:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:43.357 00:08:43.357 real 0m0.371s 00:08:43.357 user 0m0.190s 00:08:43.357 sys 0m0.069s 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.357 11:19:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.357 11:19:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:43.357 11:19:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.357 11:19:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.357 11:19:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.357 ************************************ 00:08:43.357 START TEST rpc_plugins 00:08:43.357 ************************************ 00:08:43.357 11:19:24 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:43.357 11:19:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:43.357 11:19:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.357 11:19:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:43.357 11:19:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.357 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:43.357 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:43.357 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.357 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:43.357 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.357 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:43.357 { 00:08:43.357 "name": "Malloc1", 00:08:43.357 "aliases": 
[ 00:08:43.357 "309d042c-94f8-44d8-b654-cc38e7f440e8" 00:08:43.357 ], 00:08:43.357 "product_name": "Malloc disk", 00:08:43.357 "block_size": 4096, 00:08:43.357 "num_blocks": 256, 00:08:43.357 "uuid": "309d042c-94f8-44d8-b654-cc38e7f440e8", 00:08:43.357 "assigned_rate_limits": { 00:08:43.357 "rw_ios_per_sec": 0, 00:08:43.357 "rw_mbytes_per_sec": 0, 00:08:43.357 "r_mbytes_per_sec": 0, 00:08:43.357 "w_mbytes_per_sec": 0 00:08:43.357 }, 00:08:43.357 "claimed": false, 00:08:43.357 "zoned": false, 00:08:43.357 "supported_io_types": { 00:08:43.357 "read": true, 00:08:43.357 "write": true, 00:08:43.357 "unmap": true, 00:08:43.357 "flush": true, 00:08:43.357 "reset": true, 00:08:43.357 "nvme_admin": false, 00:08:43.357 "nvme_io": false, 00:08:43.357 "nvme_io_md": false, 00:08:43.357 "write_zeroes": true, 00:08:43.357 "zcopy": true, 00:08:43.357 "get_zone_info": false, 00:08:43.357 "zone_management": false, 00:08:43.357 "zone_append": false, 00:08:43.357 "compare": false, 00:08:43.357 "compare_and_write": false, 00:08:43.357 "abort": true, 00:08:43.357 "seek_hole": false, 00:08:43.357 "seek_data": false, 00:08:43.357 "copy": true, 00:08:43.357 "nvme_iov_md": false 00:08:43.357 }, 00:08:43.357 "memory_domains": [ 00:08:43.357 { 00:08:43.357 "dma_device_id": "system", 00:08:43.357 "dma_device_type": 1 00:08:43.357 }, 00:08:43.357 { 00:08:43.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.357 "dma_device_type": 2 00:08:43.357 } 00:08:43.357 ], 00:08:43.357 "driver_specific": {} 00:08:43.357 } 00:08:43.357 ]' 00:08:43.357 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:43.616 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:43.616 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:43.616 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.616 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:43.616 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.616 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:43.616 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.616 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:43.616 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.616 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:43.616 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:43.616 ************************************ 00:08:43.616 END TEST rpc_plugins 00:08:43.616 ************************************ 00:08:43.616 11:19:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:43.616 00:08:43.616 real 0m0.172s 00:08:43.616 user 0m0.092s 00:08:43.616 sys 0m0.029s 00:08:43.616 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.616 11:19:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:43.616 11:19:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:43.616 11:19:25 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.616 11:19:25 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.616 11:19:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.616 ************************************ 00:08:43.616 START TEST rpc_trace_cmd_test 00:08:43.616 ************************************ 00:08:43.616 11:19:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:08:43.616 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:43.616 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:43.616 11:19:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.616 11:19:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.616 11:19:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.616 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:43.616 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58251", 00:08:43.616 "tpoint_group_mask": "0x8", 00:08:43.616 "iscsi_conn": { 00:08:43.616 "mask": "0x2", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "scsi": { 00:08:43.616 "mask": "0x4", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "bdev": { 00:08:43.616 "mask": "0x8", 00:08:43.616 "tpoint_mask": "0xffffffffffffffff" 00:08:43.616 }, 00:08:43.616 "nvmf_rdma": { 00:08:43.616 "mask": "0x10", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "nvmf_tcp": { 00:08:43.616 "mask": "0x20", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "ftl": { 00:08:43.616 "mask": "0x40", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "blobfs": { 00:08:43.616 "mask": "0x80", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "dsa": { 00:08:43.616 "mask": "0x200", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "thread": { 00:08:43.616 "mask": "0x400", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "nvme_pcie": { 00:08:43.616 "mask": "0x800", 00:08:43.616 "tpoint_mask": "0x0" 00:08:43.616 }, 00:08:43.616 "iaa": { 00:08:43.617 "mask": "0x1000", 00:08:43.617 "tpoint_mask": "0x0" 00:08:43.617 }, 00:08:43.617 "nvme_tcp": { 00:08:43.617 "mask": "0x2000", 00:08:43.617 "tpoint_mask": "0x0" 00:08:43.617 }, 00:08:43.617 "bdev_nvme": { 00:08:43.617 "mask": "0x4000", 00:08:43.617 "tpoint_mask": "0x0" 00:08:43.617 }, 00:08:43.617 "sock": { 00:08:43.617 "mask": "0x8000", 00:08:43.617 "tpoint_mask": "0x0" 00:08:43.617 }, 00:08:43.617 "blob": { 00:08:43.617 "mask": "0x10000", 00:08:43.617 "tpoint_mask": "0x0" 00:08:43.617 }, 00:08:43.617 "bdev_raid": { 00:08:43.617 "mask": "0x20000", 00:08:43.617 "tpoint_mask": "0x0" 00:08:43.617 }, 00:08:43.617 "scheduler": { 00:08:43.617 "mask": "0x40000", 00:08:43.617 "tpoint_mask": "0x0" 00:08:43.617 } 00:08:43.617 }' 00:08:43.617 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:43.617 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:43.617 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:43.875 00:08:43.875 real 0m0.222s 00:08:43.875 user 0m0.174s 00:08:43.875 sys 0m0.039s 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:43.875 11:19:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.875 ************************************ 00:08:43.875 END TEST rpc_trace_cmd_test 00:08:43.875 ************************************ 00:08:43.875 11:19:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:43.875 11:19:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:43.875 11:19:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:43.875 11:19:25 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.875 11:19:25 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.875 11:19:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.875 ************************************ 00:08:43.875 START TEST rpc_daemon_integrity 00:08:43.875 ************************************ 00:08:43.875 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:43.875 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:43.875 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.875 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:43.875 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.875 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:43.875 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:44.134 { 00:08:44.134 "name": "Malloc2", 00:08:44.134 "aliases": [ 00:08:44.134 "fc13918f-52a9-4c1d-985c-6188b0c13ab6" 00:08:44.134 ], 00:08:44.134 "product_name": "Malloc disk", 00:08:44.134 "block_size": 512, 00:08:44.134 "num_blocks": 16384, 00:08:44.134 "uuid": "fc13918f-52a9-4c1d-985c-6188b0c13ab6", 00:08:44.134 "assigned_rate_limits": { 00:08:44.134 "rw_ios_per_sec": 0, 00:08:44.134 "rw_mbytes_per_sec": 0, 00:08:44.134 "r_mbytes_per_sec": 0, 00:08:44.134 "w_mbytes_per_sec": 0 00:08:44.134 }, 00:08:44.134 "claimed": false, 00:08:44.134 "zoned": false, 00:08:44.134 "supported_io_types": { 00:08:44.134 "read": true, 00:08:44.134 "write": true, 00:08:44.134 "unmap": true, 00:08:44.134 "flush": true, 00:08:44.134 "reset": true, 00:08:44.134 "nvme_admin": false, 00:08:44.134 "nvme_io": false, 00:08:44.134 "nvme_io_md": false, 00:08:44.134 "write_zeroes": true, 00:08:44.134 "zcopy": true, 00:08:44.134 "get_zone_info": false, 00:08:44.134 "zone_management": false, 00:08:44.134 "zone_append": false, 00:08:44.134 "compare": false, 00:08:44.134 
"compare_and_write": false, 00:08:44.134 "abort": true, 00:08:44.134 "seek_hole": false, 00:08:44.134 "seek_data": false, 00:08:44.134 "copy": true, 00:08:44.134 "nvme_iov_md": false 00:08:44.134 }, 00:08:44.134 "memory_domains": [ 00:08:44.134 { 00:08:44.134 "dma_device_id": "system", 00:08:44.134 "dma_device_type": 1 00:08:44.134 }, 00:08:44.134 { 00:08:44.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.134 "dma_device_type": 2 00:08:44.134 } 00:08:44.134 ], 00:08:44.134 "driver_specific": {} 00:08:44.134 } 00:08:44.134 ]' 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.134 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.134 [2024-10-07 11:19:25.696017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:44.134 [2024-10-07 11:19:25.696095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.134 [2024-10-07 11:19:25.696121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:44.134 [2024-10-07 11:19:25.696137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.134 [2024-10-07 11:19:25.698919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.134 [2024-10-07 11:19:25.699437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:44.134 Passthru0 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:44.135 { 00:08:44.135 "name": "Malloc2", 00:08:44.135 "aliases": [ 00:08:44.135 "fc13918f-52a9-4c1d-985c-6188b0c13ab6" 00:08:44.135 ], 00:08:44.135 "product_name": "Malloc disk", 00:08:44.135 "block_size": 512, 00:08:44.135 "num_blocks": 16384, 00:08:44.135 "uuid": "fc13918f-52a9-4c1d-985c-6188b0c13ab6", 00:08:44.135 "assigned_rate_limits": { 00:08:44.135 "rw_ios_per_sec": 0, 00:08:44.135 "rw_mbytes_per_sec": 0, 00:08:44.135 "r_mbytes_per_sec": 0, 00:08:44.135 "w_mbytes_per_sec": 0 00:08:44.135 }, 00:08:44.135 "claimed": true, 00:08:44.135 "claim_type": "exclusive_write", 00:08:44.135 "zoned": false, 00:08:44.135 "supported_io_types": { 00:08:44.135 "read": true, 00:08:44.135 "write": true, 00:08:44.135 "unmap": true, 00:08:44.135 "flush": true, 00:08:44.135 "reset": true, 00:08:44.135 "nvme_admin": false, 00:08:44.135 "nvme_io": false, 00:08:44.135 "nvme_io_md": false, 00:08:44.135 "write_zeroes": true, 00:08:44.135 "zcopy": true, 00:08:44.135 "get_zone_info": false, 00:08:44.135 "zone_management": false, 00:08:44.135 "zone_append": false, 00:08:44.135 "compare": false, 00:08:44.135 "compare_and_write": false, 00:08:44.135 "abort": true, 00:08:44.135 "seek_hole": false, 00:08:44.135 "seek_data": false, 
00:08:44.135 "copy": true, 00:08:44.135 "nvme_iov_md": false 00:08:44.135 }, 00:08:44.135 "memory_domains": [ 00:08:44.135 { 00:08:44.135 "dma_device_id": "system", 00:08:44.135 "dma_device_type": 1 00:08:44.135 }, 00:08:44.135 { 00:08:44.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.135 "dma_device_type": 2 00:08:44.135 } 00:08:44.135 ], 00:08:44.135 "driver_specific": {} 00:08:44.135 }, 00:08:44.135 { 00:08:44.135 "name": "Passthru0", 00:08:44.135 "aliases": [ 00:08:44.135 "20ce1174-72aa-5295-99c2-043535f4a49f" 00:08:44.135 ], 00:08:44.135 "product_name": "passthru", 00:08:44.135 "block_size": 512, 00:08:44.135 "num_blocks": 16384, 00:08:44.135 "uuid": "20ce1174-72aa-5295-99c2-043535f4a49f", 00:08:44.135 "assigned_rate_limits": { 00:08:44.135 "rw_ios_per_sec": 0, 00:08:44.135 "rw_mbytes_per_sec": 0, 00:08:44.135 "r_mbytes_per_sec": 0, 00:08:44.135 "w_mbytes_per_sec": 0 00:08:44.135 }, 00:08:44.135 "claimed": false, 00:08:44.135 "zoned": false, 00:08:44.135 "supported_io_types": { 00:08:44.135 "read": true, 00:08:44.135 "write": true, 00:08:44.135 "unmap": true, 00:08:44.135 "flush": true, 00:08:44.135 "reset": true, 00:08:44.135 "nvme_admin": false, 00:08:44.135 "nvme_io": false, 00:08:44.135 "nvme_io_md": false, 00:08:44.135 "write_zeroes": true, 00:08:44.135 "zcopy": true, 00:08:44.135 "get_zone_info": false, 00:08:44.135 "zone_management": false, 00:08:44.135 "zone_append": false, 00:08:44.135 "compare": false, 00:08:44.135 "compare_and_write": false, 00:08:44.135 "abort": true, 00:08:44.135 "seek_hole": false, 00:08:44.135 "seek_data": false, 00:08:44.135 "copy": true, 00:08:44.135 "nvme_iov_md": false 00:08:44.135 }, 00:08:44.135 "memory_domains": [ 00:08:44.135 { 00:08:44.135 "dma_device_id": "system", 00:08:44.135 "dma_device_type": 1 00:08:44.135 }, 00:08:44.135 { 00:08:44.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.135 "dma_device_type": 2 00:08:44.135 } 00:08:44.135 ], 00:08:44.135 "driver_specific": { 00:08:44.135 "passthru": { 00:08:44.135 "name": "Passthru0", 00:08:44.135 "base_bdev_name": "Malloc2" 00:08:44.135 } 00:08:44.135 } 00:08:44.135 } 00:08:44.135 ]' 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.135 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.394 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.394 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:08:44.394 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:44.394 ************************************ 00:08:44.394 END TEST rpc_daemon_integrity 00:08:44.394 ************************************ 00:08:44.394 11:19:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:44.394 00:08:44.394 real 0m0.376s 00:08:44.394 user 0m0.212s 00:08:44.394 sys 0m0.062s 00:08:44.394 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.394 11:19:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.394 11:19:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:44.394 11:19:25 rpc -- rpc/rpc.sh@84 -- # killprocess 58251 00:08:44.394 11:19:25 rpc -- common/autotest_common.sh@950 -- # '[' -z 58251 ']' 00:08:44.394 11:19:25 rpc -- common/autotest_common.sh@954 -- # kill -0 58251 00:08:44.394 11:19:25 rpc -- common/autotest_common.sh@955 -- # uname 00:08:44.394 11:19:25 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.394 11:19:25 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58251 00:08:44.394 killing process with pid 58251 00:08:44.394 11:19:26 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.394 11:19:26 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.394 11:19:26 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58251' 00:08:44.394 11:19:26 rpc -- common/autotest_common.sh@969 -- # kill 58251 00:08:44.394 11:19:26 rpc -- common/autotest_common.sh@974 -- # wait 58251 00:08:47.743 00:08:47.743 real 0m5.967s 00:08:47.743 user 0m6.475s 00:08:47.743 sys 0m1.069s 00:08:47.743 11:19:28 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.743 ************************************ 00:08:47.743 END TEST rpc 00:08:47.743 ************************************ 00:08:47.743 11:19:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.743 11:19:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:47.743 11:19:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.743 11:19:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.743 11:19:28 -- common/autotest_common.sh@10 -- # set +x 00:08:47.743 ************************************ 00:08:47.743 START TEST skip_rpc 00:08:47.743 ************************************ 00:08:47.743 11:19:28 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:47.743 * Looking for test storage... 
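[editor's note] Each suite in this log tears its spdk_tgt down through the killprocess helper whose xtrace appears above: it checks that the pid argument is non-empty, asserts the process is still alive with kill -0, resolves the process name via ps (reactor_0 in this run) mainly to special-case sudo, then kills and reaps the target. A condensed reconstruction of that logic from the trace — not a verbatim copy of the real helper in common/autotest_common.sh:

    # Reconstructed from the xtrace above; behavior on the sudo branch is elided.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1        # the trace shows a '[ -z $pid ]' guard first
        kill -0 "$pid"                   # fails if the process is already gone
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        # (the real helper also tests "$process_name" = sudo; omitted here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap the child before the END banner prints
    }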
00:08:47.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:47.743 11:19:28 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:47.743 11:19:28 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:47.743 11:19:28 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.743 11:19:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:47.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.743 --rc genhtml_branch_coverage=1 00:08:47.743 --rc genhtml_function_coverage=1 00:08:47.743 --rc genhtml_legend=1 00:08:47.743 --rc geninfo_all_blocks=1 00:08:47.743 --rc geninfo_unexecuted_blocks=1 00:08:47.743 00:08:47.743 ' 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:47.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.743 --rc genhtml_branch_coverage=1 00:08:47.743 --rc genhtml_function_coverage=1 00:08:47.743 --rc genhtml_legend=1 00:08:47.743 --rc geninfo_all_blocks=1 00:08:47.743 --rc geninfo_unexecuted_blocks=1 00:08:47.743 00:08:47.743 ' 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:47.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.743 --rc genhtml_branch_coverage=1 00:08:47.743 --rc genhtml_function_coverage=1 00:08:47.743 --rc genhtml_legend=1 00:08:47.743 --rc geninfo_all_blocks=1 00:08:47.743 --rc geninfo_unexecuted_blocks=1 00:08:47.743 00:08:47.743 ' 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:47.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.743 --rc genhtml_branch_coverage=1 00:08:47.743 --rc genhtml_function_coverage=1 00:08:47.743 --rc genhtml_legend=1 00:08:47.743 --rc geninfo_all_blocks=1 00:08:47.743 --rc geninfo_unexecuted_blocks=1 00:08:47.743 00:08:47.743 ' 00:08:47.743 11:19:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:47.743 11:19:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:47.743 11:19:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.743 11:19:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.743 ************************************ 00:08:47.743 START TEST skip_rpc 00:08:47.743 ************************************ 00:08:47.743 11:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:47.743 11:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58486 00:08:47.743 11:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:47.743 11:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:47.743 11:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:47.743 [2024-10-07 11:19:29.222988] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:47.743 [2024-10-07 11:19:29.223294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58486 ] 00:08:47.743 [2024-10-07 11:19:29.400994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.001 [2024-10-07 11:19:29.633798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58486 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58486 ']' 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58486 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58486 00:08:53.272 killing process with pid 58486 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58486' 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58486 00:08:53.272 11:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58486 00:08:55.175 00:08:55.175 real 0m7.724s 00:08:55.175 user 0m7.181s 00:08:55.175 sys 0m0.448s 00:08:55.175 11:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.175 11:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.175 ************************************ 00:08:55.175 END TEST skip_rpc 00:08:55.175 
************************************ 00:08:55.175 11:19:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:55.175 11:19:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.175 11:19:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.175 11:19:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.175 ************************************ 00:08:55.175 START TEST skip_rpc_with_json 00:08:55.175 ************************************ 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58601 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58601 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58601 ']' 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.175 11:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:55.434 [2024-10-07 11:19:36.982012] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:55.434 [2024-10-07 11:19:36.982317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58601 ] 00:08:55.692 [2024-10-07 11:19:37.155675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.692 [2024-10-07 11:19:37.377595] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:56.626 [2024-10-07 11:19:38.268107] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:56.626 request: 00:08:56.626 { 00:08:56.626 "trtype": "tcp", 00:08:56.626 "method": "nvmf_get_transports", 00:08:56.626 "req_id": 1 00:08:56.626 } 00:08:56.626 Got JSON-RPC error response 00:08:56.626 response: 00:08:56.626 { 00:08:56.626 "code": -19, 00:08:56.626 "message": "No such device" 00:08:56.626 } 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:56.626 [2024-10-07 11:19:38.284208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.626 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:56.882 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.882 11:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:56.882 { 00:08:56.882 "subsystems": [ 00:08:56.882 { 00:08:56.882 "subsystem": "fsdev", 00:08:56.882 "config": [ 00:08:56.882 { 00:08:56.882 "method": "fsdev_set_opts", 00:08:56.882 "params": { 00:08:56.882 "fsdev_io_pool_size": 65535, 00:08:56.882 "fsdev_io_cache_size": 256 00:08:56.882 } 00:08:56.882 } 00:08:56.882 ] 00:08:56.882 }, 00:08:56.882 { 00:08:56.882 "subsystem": "keyring", 00:08:56.882 "config": [] 00:08:56.882 }, 00:08:56.882 { 00:08:56.882 "subsystem": "iobuf", 00:08:56.882 "config": [ 00:08:56.882 { 00:08:56.883 "method": "iobuf_set_options", 00:08:56.883 "params": { 00:08:56.883 "small_pool_count": 8192, 00:08:56.883 "large_pool_count": 1024, 00:08:56.883 "small_bufsize": 8192, 00:08:56.883 "large_bufsize": 135168 00:08:56.883 } 00:08:56.883 } 00:08:56.883 ] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "sock", 00:08:56.883 "config": [ 00:08:56.883 { 00:08:56.883 "method": 
"sock_set_default_impl", 00:08:56.883 "params": { 00:08:56.883 "impl_name": "posix" 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "sock_impl_set_options", 00:08:56.883 "params": { 00:08:56.883 "impl_name": "ssl", 00:08:56.883 "recv_buf_size": 4096, 00:08:56.883 "send_buf_size": 4096, 00:08:56.883 "enable_recv_pipe": true, 00:08:56.883 "enable_quickack": false, 00:08:56.883 "enable_placement_id": 0, 00:08:56.883 "enable_zerocopy_send_server": true, 00:08:56.883 "enable_zerocopy_send_client": false, 00:08:56.883 "zerocopy_threshold": 0, 00:08:56.883 "tls_version": 0, 00:08:56.883 "enable_ktls": false 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "sock_impl_set_options", 00:08:56.883 "params": { 00:08:56.883 "impl_name": "posix", 00:08:56.883 "recv_buf_size": 2097152, 00:08:56.883 "send_buf_size": 2097152, 00:08:56.883 "enable_recv_pipe": true, 00:08:56.883 "enable_quickack": false, 00:08:56.883 "enable_placement_id": 0, 00:08:56.883 "enable_zerocopy_send_server": true, 00:08:56.883 "enable_zerocopy_send_client": false, 00:08:56.883 "zerocopy_threshold": 0, 00:08:56.883 "tls_version": 0, 00:08:56.883 "enable_ktls": false 00:08:56.883 } 00:08:56.883 } 00:08:56.883 ] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "vmd", 00:08:56.883 "config": [] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "accel", 00:08:56.883 "config": [ 00:08:56.883 { 00:08:56.883 "method": "accel_set_options", 00:08:56.883 "params": { 00:08:56.883 "small_cache_size": 128, 00:08:56.883 "large_cache_size": 16, 00:08:56.883 "task_count": 2048, 00:08:56.883 "sequence_count": 2048, 00:08:56.883 "buf_count": 2048 00:08:56.883 } 00:08:56.883 } 00:08:56.883 ] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "bdev", 00:08:56.883 "config": [ 00:08:56.883 { 00:08:56.883 "method": "bdev_set_options", 00:08:56.883 "params": { 00:08:56.883 "bdev_io_pool_size": 65535, 00:08:56.883 "bdev_io_cache_size": 256, 00:08:56.883 "bdev_auto_examine": true, 00:08:56.883 "iobuf_small_cache_size": 128, 00:08:56.883 "iobuf_large_cache_size": 16 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "bdev_raid_set_options", 00:08:56.883 "params": { 00:08:56.883 "process_window_size_kb": 1024, 00:08:56.883 "process_max_bandwidth_mb_sec": 0 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "bdev_iscsi_set_options", 00:08:56.883 "params": { 00:08:56.883 "timeout_sec": 30 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "bdev_nvme_set_options", 00:08:56.883 "params": { 00:08:56.883 "action_on_timeout": "none", 00:08:56.883 "timeout_us": 0, 00:08:56.883 "timeout_admin_us": 0, 00:08:56.883 "keep_alive_timeout_ms": 10000, 00:08:56.883 "arbitration_burst": 0, 00:08:56.883 "low_priority_weight": 0, 00:08:56.883 "medium_priority_weight": 0, 00:08:56.883 "high_priority_weight": 0, 00:08:56.883 "nvme_adminq_poll_period_us": 10000, 00:08:56.883 "nvme_ioq_poll_period_us": 0, 00:08:56.883 "io_queue_requests": 0, 00:08:56.883 "delay_cmd_submit": true, 00:08:56.883 "transport_retry_count": 4, 00:08:56.883 "bdev_retry_count": 3, 00:08:56.883 "transport_ack_timeout": 0, 00:08:56.883 "ctrlr_loss_timeout_sec": 0, 00:08:56.883 "reconnect_delay_sec": 0, 00:08:56.883 "fast_io_fail_timeout_sec": 0, 00:08:56.883 "disable_auto_failback": false, 00:08:56.883 "generate_uuids": false, 00:08:56.883 "transport_tos": 0, 00:08:56.883 "nvme_error_stat": false, 00:08:56.883 "rdma_srq_size": 0, 00:08:56.883 "io_path_stat": false, 00:08:56.883 
"allow_accel_sequence": false, 00:08:56.883 "rdma_max_cq_size": 0, 00:08:56.883 "rdma_cm_event_timeout_ms": 0, 00:08:56.883 "dhchap_digests": [ 00:08:56.883 "sha256", 00:08:56.883 "sha384", 00:08:56.883 "sha512" 00:08:56.883 ], 00:08:56.883 "dhchap_dhgroups": [ 00:08:56.883 "null", 00:08:56.883 "ffdhe2048", 00:08:56.883 "ffdhe3072", 00:08:56.883 "ffdhe4096", 00:08:56.883 "ffdhe6144", 00:08:56.883 "ffdhe8192" 00:08:56.883 ] 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "bdev_nvme_set_hotplug", 00:08:56.883 "params": { 00:08:56.883 "period_us": 100000, 00:08:56.883 "enable": false 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "bdev_wait_for_examine" 00:08:56.883 } 00:08:56.883 ] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "scsi", 00:08:56.883 "config": null 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "scheduler", 00:08:56.883 "config": [ 00:08:56.883 { 00:08:56.883 "method": "framework_set_scheduler", 00:08:56.883 "params": { 00:08:56.883 "name": "static" 00:08:56.883 } 00:08:56.883 } 00:08:56.883 ] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "vhost_scsi", 00:08:56.883 "config": [] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "vhost_blk", 00:08:56.883 "config": [] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "ublk", 00:08:56.883 "config": [] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "nbd", 00:08:56.883 "config": [] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "nvmf", 00:08:56.883 "config": [ 00:08:56.883 { 00:08:56.883 "method": "nvmf_set_config", 00:08:56.883 "params": { 00:08:56.883 "discovery_filter": "match_any", 00:08:56.883 "admin_cmd_passthru": { 00:08:56.883 "identify_ctrlr": false 00:08:56.883 }, 00:08:56.883 "dhchap_digests": [ 00:08:56.883 "sha256", 00:08:56.883 "sha384", 00:08:56.883 "sha512" 00:08:56.883 ], 00:08:56.883 "dhchap_dhgroups": [ 00:08:56.883 "null", 00:08:56.883 "ffdhe2048", 00:08:56.883 "ffdhe3072", 00:08:56.883 "ffdhe4096", 00:08:56.883 "ffdhe6144", 00:08:56.883 "ffdhe8192" 00:08:56.883 ] 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "nvmf_set_max_subsystems", 00:08:56.883 "params": { 00:08:56.883 "max_subsystems": 1024 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "nvmf_set_crdt", 00:08:56.883 "params": { 00:08:56.883 "crdt1": 0, 00:08:56.883 "crdt2": 0, 00:08:56.883 "crdt3": 0 00:08:56.883 } 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "method": "nvmf_create_transport", 00:08:56.883 "params": { 00:08:56.883 "trtype": "TCP", 00:08:56.883 "max_queue_depth": 128, 00:08:56.883 "max_io_qpairs_per_ctrlr": 127, 00:08:56.883 "in_capsule_data_size": 4096, 00:08:56.883 "max_io_size": 131072, 00:08:56.883 "io_unit_size": 131072, 00:08:56.883 "max_aq_depth": 128, 00:08:56.883 "num_shared_buffers": 511, 00:08:56.883 "buf_cache_size": 4294967295, 00:08:56.883 "dif_insert_or_strip": false, 00:08:56.883 "zcopy": false, 00:08:56.883 "c2h_success": true, 00:08:56.883 "sock_priority": 0, 00:08:56.883 "abort_timeout_sec": 1, 00:08:56.883 "ack_timeout": 0, 00:08:56.883 "data_wr_pool_size": 0 00:08:56.883 } 00:08:56.883 } 00:08:56.883 ] 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "subsystem": "iscsi", 00:08:56.883 "config": [ 00:08:56.883 { 00:08:56.883 "method": "iscsi_set_options", 00:08:56.883 "params": { 00:08:56.883 "node_base": "iqn.2016-06.io.spdk", 00:08:56.883 "max_sessions": 128, 00:08:56.883 "max_connections_per_session": 2, 00:08:56.883 "max_queue_depth": 64, 00:08:56.883 "default_time2wait": 2, 
00:08:56.883 "default_time2retain": 20, 00:08:56.883 "first_burst_length": 8192, 00:08:56.883 "immediate_data": true, 00:08:56.883 "allow_duplicated_isid": false, 00:08:56.883 "error_recovery_level": 0, 00:08:56.883 "nop_timeout": 60, 00:08:56.883 "nop_in_interval": 30, 00:08:56.883 "disable_chap": false, 00:08:56.883 "require_chap": false, 00:08:56.883 "mutual_chap": false, 00:08:56.883 "chap_group": 0, 00:08:56.883 "max_large_datain_per_connection": 64, 00:08:56.883 "max_r2t_per_connection": 4, 00:08:56.883 "pdu_pool_size": 36864, 00:08:56.883 "immediate_data_pool_size": 16384, 00:08:56.883 "data_out_pool_size": 2048 00:08:56.883 } 00:08:56.883 } 00:08:56.883 ] 00:08:56.883 } 00:08:56.883 ] 00:08:56.883 } 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58601 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58601 ']' 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58601 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58601 00:08:56.883 killing process with pid 58601 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58601' 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58601 00:08:56.883 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58601 00:09:00.178 11:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58657 00:09:00.178 11:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:00.178 11:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58657 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58657 ']' 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58657 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58657 00:09:05.470 killing process with pid 58657 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58657' 00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58657 
00:09:05.470 11:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58657 00:09:07.373 11:19:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:07.373 11:19:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:07.373 00:09:07.373 real 0m12.115s 00:09:07.373 user 0m11.506s 00:09:07.373 sys 0m0.963s 00:09:07.373 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.373 11:19:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:07.373 ************************************ 00:09:07.373 END TEST skip_rpc_with_json 00:09:07.373 ************************************ 00:09:07.373 11:19:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:07.373 11:19:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.373 11:19:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.373 11:19:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.373 ************************************ 00:09:07.373 START TEST skip_rpc_with_delay 00:09:07.373 ************************************ 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:07.373 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:07.631 [2024-10-07 11:19:49.188072] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
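[editor's note] The 'Cannot use --wait-for-rpc' error above is the success condition for skip_rpc_with_delay: --wait-for-rpc tells the target to pause startup until an RPC arrives, which is impossible once --no-rpc-server disables the RPC listener, so spdk_tgt must refuse the combination and exit non-zero. The NOT wrapper in the trace inverts that exit status; a bare-bones equivalent, assuming the same binary path:

    # Expect spdk_tgt to reject --wait-for-rpc when no RPC server will be started.
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    if "$tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'FAIL: target started despite the conflicting flags' >&2
        exit 1
    fi
    echo 'PASS: conflicting flags were rejected as expected'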
00:09:07.631 [2024-10-07 11:19:49.188233] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:07.631 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:07.631 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:07.631 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:07.631 ************************************ 00:09:07.631 END TEST skip_rpc_with_delay 00:09:07.631 ************************************ 00:09:07.631 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:07.631 00:09:07.631 real 0m0.200s 00:09:07.631 user 0m0.104s 00:09:07.631 sys 0m0.093s 00:09:07.631 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.631 11:19:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:07.631 11:19:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:07.631 11:19:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:07.631 11:19:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:07.631 11:19:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.631 11:19:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.631 11:19:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.631 ************************************ 00:09:07.631 START TEST exit_on_failed_rpc_init 00:09:07.631 ************************************ 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58796 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58796 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58796 ']' 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.631 11:19:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:07.888 [2024-10-07 11:19:49.448587] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:09:07.888 [2024-10-07 11:19:49.448948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58796 ] 00:09:08.146 [2024-10-07 11:19:49.623458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.404 [2024-10-07 11:19:49.862777] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:09.343 11:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:09.343 [2024-10-07 11:19:50.991608] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:09:09.343 [2024-10-07 11:19:50.991969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58819 ] 00:09:09.601 [2024-10-07 11:19:51.169697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.858 [2024-10-07 11:19:51.400160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.858 [2024-10-07 11:19:51.400290] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
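[editor's note] The listen failure above is deliberately provoked: exit_on_failed_rpc_init starts one target on the default socket /var/tmp/spdk.sock (pid 58796), then launches a second spdk_tgt on core mask 0x2 against the same socket and expects RPC initialization to fail and the second instance to exit non-zero (the 'Unable to start RPC service' line and the es=234 status that follow below). A stripped-down sketch of that collision, assuming the same binary; the sleep is a crude stand-in for the suite's waitforlisten helper:

    # First target claims the default RPC socket.
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &
    pid=$!
    sleep 2                                   # crude wait; the suite uses waitforlisten
    # A second target on the same socket must fail RPC init and exit non-zero.
    if "$tgt" -m 0x2; then
        echo 'FAIL: second target started on a busy socket' >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid"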
00:09:09.858 [2024-10-07 11:19:51.400308] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:09.858 [2024-10-07 11:19:51.400328] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58796 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58796 ']' 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58796 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58796 00:09:10.427 killing process with pid 58796 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58796' 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58796 00:09:10.427 11:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58796 00:09:13.711 ************************************ 00:09:13.711 END TEST exit_on_failed_rpc_init 00:09:13.711 ************************************ 00:09:13.711 00:09:13.711 real 0m5.448s 00:09:13.711 user 0m5.968s 00:09:13.711 sys 0m0.747s 00:09:13.711 11:19:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.711 11:19:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:13.711 11:19:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:13.711 00:09:13.711 real 0m26.027s 00:09:13.711 user 0m24.998s 00:09:13.711 sys 0m2.562s 00:09:13.711 11:19:54 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.711 ************************************ 00:09:13.711 END TEST skip_rpc 00:09:13.711 ************************************ 00:09:13.711 11:19:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.711 11:19:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:13.711 11:19:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.711 11:19:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.711 11:19:54 -- common/autotest_common.sh@10 -- # set +x 00:09:13.711 
************************************ 00:09:13.711 START TEST rpc_client 00:09:13.711 ************************************ 00:09:13.711 11:19:54 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:13.711 * Looking for test storage... 00:09:13.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.711 11:19:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.711 --rc genhtml_branch_coverage=1 00:09:13.711 --rc genhtml_function_coverage=1 00:09:13.711 --rc genhtml_legend=1 00:09:13.711 --rc geninfo_all_blocks=1 00:09:13.711 --rc geninfo_unexecuted_blocks=1 00:09:13.711 00:09:13.711 ' 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.711 --rc genhtml_branch_coverage=1 00:09:13.711 --rc genhtml_function_coverage=1 00:09:13.711 --rc genhtml_legend=1 00:09:13.711 --rc geninfo_all_blocks=1 00:09:13.711 --rc geninfo_unexecuted_blocks=1 00:09:13.711 00:09:13.711 ' 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.711 --rc genhtml_branch_coverage=1 00:09:13.711 --rc genhtml_function_coverage=1 00:09:13.711 --rc genhtml_legend=1 00:09:13.711 --rc geninfo_all_blocks=1 00:09:13.711 --rc geninfo_unexecuted_blocks=1 00:09:13.711 00:09:13.711 ' 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.711 --rc genhtml_branch_coverage=1 00:09:13.711 --rc genhtml_function_coverage=1 00:09:13.711 --rc genhtml_legend=1 00:09:13.711 --rc geninfo_all_blocks=1 00:09:13.711 --rc geninfo_unexecuted_blocks=1 00:09:13.711 00:09:13.711 ' 00:09:13.711 11:19:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:13.711 OK 00:09:13.711 11:19:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:13.711 00:09:13.711 real 0m0.347s 00:09:13.711 user 0m0.206s 00:09:13.711 sys 0m0.153s 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.711 11:19:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:13.711 ************************************ 00:09:13.711 END TEST rpc_client 00:09:13.711 ************************************ 00:09:13.711 11:19:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:13.711 11:19:55 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.711 11:19:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.711 11:19:55 -- common/autotest_common.sh@10 -- # set +x 00:09:13.711 ************************************ 00:09:13.711 START TEST json_config 00:09:13.711 ************************************ 00:09:13.711 11:19:55 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:13.711 11:19:55 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:13.970 11:19:55 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:13.970 11:19:55 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:09:13.970 11:19:55 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:13.970 11:19:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.970 11:19:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.970 11:19:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.970 11:19:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.970 11:19:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.970 11:19:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.970 11:19:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.970 11:19:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.970 11:19:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.970 11:19:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.970 11:19:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.970 11:19:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:13.970 11:19:55 json_config -- scripts/common.sh@345 -- # : 1 00:09:13.970 11:19:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.970 11:19:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.970 11:19:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:13.970 11:19:55 json_config -- scripts/common.sh@353 -- # local d=1 00:09:13.970 11:19:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.970 11:19:55 json_config -- scripts/common.sh@355 -- # echo 1 00:09:13.970 11:19:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.970 11:19:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:13.970 11:19:55 json_config -- scripts/common.sh@353 -- # local d=2 00:09:13.970 11:19:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.970 11:19:55 json_config -- scripts/common.sh@355 -- # echo 2 00:09:13.970 11:19:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.970 11:19:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.970 11:19:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.970 11:19:55 json_config -- scripts/common.sh@368 -- # return 0 00:09:13.970 11:19:55 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.970 11:19:55 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:13.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.970 --rc genhtml_branch_coverage=1 00:09:13.970 --rc genhtml_function_coverage=1 00:09:13.970 --rc genhtml_legend=1 00:09:13.970 --rc geninfo_all_blocks=1 00:09:13.970 --rc geninfo_unexecuted_blocks=1 00:09:13.970 00:09:13.970 ' 00:09:13.970 11:19:55 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:13.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.970 --rc genhtml_branch_coverage=1 00:09:13.970 --rc genhtml_function_coverage=1 00:09:13.970 --rc genhtml_legend=1 00:09:13.970 --rc geninfo_all_blocks=1 00:09:13.970 --rc geninfo_unexecuted_blocks=1 00:09:13.970 00:09:13.970 ' 00:09:13.970 11:19:55 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:13.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.970 --rc genhtml_branch_coverage=1 00:09:13.970 --rc genhtml_function_coverage=1 00:09:13.970 --rc genhtml_legend=1 00:09:13.970 --rc geninfo_all_blocks=1 00:09:13.970 --rc geninfo_unexecuted_blocks=1 00:09:13.970 00:09:13.970 ' 00:09:13.970 11:19:55 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:13.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.970 --rc genhtml_branch_coverage=1 00:09:13.970 --rc genhtml_function_coverage=1 00:09:13.970 --rc genhtml_legend=1 00:09:13.970 --rc geninfo_all_blocks=1 00:09:13.970 --rc geninfo_unexecuted_blocks=1 00:09:13.970 00:09:13.971 ' 00:09:13.971 11:19:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.971 11:19:55 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8e2af7c2-404d-449a-a884-b26bc0fd0f09 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8e2af7c2-404d-449a-a884-b26bc0fd0f09 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.971 11:19:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.971 11:19:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.971 11:19:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.971 11:19:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.971 11:19:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.971 11:19:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.971 11:19:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.971 11:19:55 json_config -- paths/export.sh@5 -- # export PATH 00:09:13.971 11:19:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@51 -- # : 0 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.971 11:19:55 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.971 11:19:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.971 11:19:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:13.971 11:19:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:13.971 11:19:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:13.971 11:19:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:13.971 11:19:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:13.971 11:19:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:13.971 WARNING: No tests are enabled so not running JSON configuration tests 00:09:13.971 11:19:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:13.971 00:09:13.971 real 0m0.223s 00:09:13.971 user 0m0.135s 00:09:13.971 sys 0m0.087s 00:09:13.971 11:19:55 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.971 11:19:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:13.971 ************************************ 00:09:13.971 END TEST json_config 00:09:13.971 ************************************ 00:09:13.971 11:19:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:13.971 11:19:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.971 11:19:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.971 11:19:55 -- common/autotest_common.sh@10 -- # set +x 00:09:13.971 ************************************ 00:09:13.971 START TEST json_config_extra_key 00:09:13.971 ************************************ 00:09:13.971 11:19:55 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:14.303 11:19:55 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:14.303 11:19:55 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:09:14.303 11:19:55 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:14.303 11:19:55 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.303 11:19:55 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:14.303 11:19:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:14.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.304 --rc genhtml_branch_coverage=1 00:09:14.304 --rc genhtml_function_coverage=1 00:09:14.304 --rc genhtml_legend=1 00:09:14.304 --rc geninfo_all_blocks=1 00:09:14.304 --rc geninfo_unexecuted_blocks=1 00:09:14.304 00:09:14.304 ' 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:14.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.304 --rc genhtml_branch_coverage=1 00:09:14.304 --rc genhtml_function_coverage=1 00:09:14.304 --rc genhtml_legend=1 00:09:14.304 --rc geninfo_all_blocks=1 00:09:14.304 --rc geninfo_unexecuted_blocks=1 00:09:14.304 00:09:14.304 ' 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:14.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.304 --rc genhtml_branch_coverage=1 00:09:14.304 --rc genhtml_function_coverage=1 00:09:14.304 --rc genhtml_legend=1 00:09:14.304 --rc geninfo_all_blocks=1 00:09:14.304 --rc geninfo_unexecuted_blocks=1 00:09:14.304 00:09:14.304 ' 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:14.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.304 --rc genhtml_branch_coverage=1 00:09:14.304 --rc 
genhtml_function_coverage=1 00:09:14.304 --rc genhtml_legend=1 00:09:14.304 --rc geninfo_all_blocks=1 00:09:14.304 --rc geninfo_unexecuted_blocks=1 00:09:14.304 00:09:14.304 ' 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8e2af7c2-404d-449a-a884-b26bc0fd0f09 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8e2af7c2-404d-449a-a884-b26bc0fd0f09 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.304 11:19:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.304 11:19:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.304 11:19:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.304 11:19:55 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.304 11:19:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:14.304 11:19:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.304 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.304 11:19:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:14.304 INFO: launching applications... 
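The trace below launches the target with a pre-baked JSON config. A minimal sketch of the pattern, condensed from the json_config_test_start_app trace (array names, socket path, and params are taken from the trace; error handling is omitted, so this is illustrative rather than the verbatim helper):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')

    start_app_sketch() {
            local app=$1 config=$2
            # Start spdk_tgt on a private RPC socket with the JSON config preloaded.
            /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
                    -r "${app_socket[$app]}" --json "$config" &
            app_pid[$app]=$!
            # Block until the app answers on its RPC socket (helper visible in the trace).
            waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
    }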
00:09:14.304 11:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59035 00:09:14.304 Waiting for target to run... 00:09:14.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59035 /var/tmp/spdk_tgt.sock 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59035 ']' 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.304 11:19:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:14.304 11:19:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:14.562 [2024-10-07 11:19:56.027856] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:09:14.562 [2024-10-07 11:19:56.028003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 00:09:14.821 [2024-10-07 11:19:56.434591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.080 [2024-10-07 11:19:56.654830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.015 00:09:16.015 INFO: shutting down applications... 00:09:16.015 11:19:57 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.015 11:19:57 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:16.015 11:19:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
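Shutdown, traced next, is a signal followed by a bounded poll. A sketch of the loop visible in the trace (kill -SIGINT, then up to 30 probes at 0.5 s intervals; simplified, the real helper also handles the timeout case):

    shutdown_app_sketch() {
            local pid=$1
            kill -SIGINT "$pid"                    # request a clean exit
            for (( i = 0; i < 30; i++ )); do
                    # kill -0 sends no signal; it succeeds only while the pid exists
                    kill -0 "$pid" 2>/dev/null || break
                    sleep 0.5
            done
            echo 'SPDK target shutdown done'
    }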
00:09:16.015 11:19:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59035 ]] 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59035 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59035 00:09:16.015 11:19:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:16.274 11:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:16.274 11:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:16.274 11:19:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59035 00:09:16.274 11:19:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:16.845 11:19:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:16.845 11:19:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:16.845 11:19:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59035 00:09:16.845 11:19:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:17.425 11:19:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:17.425 11:19:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:17.425 11:19:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59035 00:09:17.425 11:19:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:17.992 11:19:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:17.992 11:19:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:17.992 11:19:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59035 00:09:17.992 11:19:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:18.558 11:19:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:18.558 11:19:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:18.558 11:19:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59035 00:09:18.558 11:19:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:18.815 11:20:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:18.815 11:20:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:18.815 11:20:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59035 00:09:18.815 11:20:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:19.383 11:20:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:19.383 11:20:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:19.383 11:20:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59035 00:09:19.383 11:20:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:19.383 11:20:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:19.383 11:20:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:19.383 SPDK target shutdown 
done 00:09:19.383 Success 00:09:19.383 11:20:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:19.383 11:20:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:19.383 00:09:19.383 real 0m5.371s 00:09:19.383 user 0m4.907s 00:09:19.383 sys 0m0.668s 00:09:19.383 ************************************ 00:09:19.383 END TEST json_config_extra_key 00:09:19.383 ************************************ 00:09:19.383 11:20:01 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.383 11:20:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:19.383 11:20:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:19.383 11:20:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.383 11:20:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.383 11:20:01 -- common/autotest_common.sh@10 -- # set +x 00:09:19.383 ************************************ 00:09:19.383 START TEST alias_rpc 00:09:19.383 ************************************ 00:09:19.383 11:20:01 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:19.642 * Looking for test storage... 00:09:19.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:19.642 11:20:01 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:19.642 11:20:01 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:19.642 11:20:01 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:19.642 11:20:01 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.642 11:20:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:19.642 11:20:01 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.642 11:20:01 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:19.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.642 --rc genhtml_branch_coverage=1 00:09:19.642 --rc genhtml_function_coverage=1 00:09:19.642 --rc genhtml_legend=1 00:09:19.642 --rc geninfo_all_blocks=1 00:09:19.642 --rc geninfo_unexecuted_blocks=1 00:09:19.642 00:09:19.642 ' 00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.643 --rc genhtml_branch_coverage=1 00:09:19.643 --rc genhtml_function_coverage=1 00:09:19.643 --rc genhtml_legend=1 00:09:19.643 --rc geninfo_all_blocks=1 00:09:19.643 --rc geninfo_unexecuted_blocks=1 00:09:19.643 00:09:19.643 ' 00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.643 --rc genhtml_branch_coverage=1 00:09:19.643 --rc genhtml_function_coverage=1 00:09:19.643 --rc genhtml_legend=1 00:09:19.643 --rc geninfo_all_blocks=1 00:09:19.643 --rc geninfo_unexecuted_blocks=1 00:09:19.643 00:09:19.643 ' 00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.643 --rc genhtml_branch_coverage=1 00:09:19.643 --rc genhtml_function_coverage=1 00:09:19.643 --rc genhtml_legend=1 00:09:19.643 --rc geninfo_all_blocks=1 00:09:19.643 --rc geninfo_unexecuted_blocks=1 00:09:19.643 00:09:19.643 ' 00:09:19.643 11:20:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:19.643 11:20:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59159 00:09:19.643 11:20:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.643 11:20:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59159 00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59159 ']' 00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
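The lcov version gate traced repeatedly above (scripts/common.sh, where `lt 1.15 2` expands into cmp_versions) splits both version strings on `.`, `-`, and `:` and compares component-wise. A simplified reconstruction from the trace (the real script also validates each component through its decimal helper and supports other comparison operators):

    lt_sketch() { cmp_versions_sketch "$1" "$2"; }    # true when $1 < $2

    cmp_versions_sketch() {
            local IFS=.-: v
            read -ra ver1 <<< "$1"
            read -ra ver2 <<< "$2"
            for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
                    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
                    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            done
            return 1    # equal versions are not strictly less-than
    }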
00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.643 11:20:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.902 [2024-10-07 11:20:01.448666] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:09:19.902 [2024-10-07 11:20:01.448813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59159 ] 00:09:20.161 [2024-10-07 11:20:01.617811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.420 [2024-10-07 11:20:01.905275] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.355 11:20:02 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.355 11:20:02 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:21.355 11:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:21.612 11:20:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59159 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59159 ']' 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59159 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59159 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.612 killing process with pid 59159 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59159' 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@969 -- # kill 59159 00:09:21.612 11:20:03 alias_rpc -- common/autotest_common.sh@974 -- # wait 59159 00:09:24.944 00:09:24.944 real 0m5.078s 00:09:24.944 user 0m4.903s 00:09:24.944 sys 0m0.884s 00:09:24.944 11:20:06 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.944 ************************************ 00:09:24.944 END TEST alias_rpc 00:09:24.944 11:20:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.944 ************************************ 00:09:24.944 11:20:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:24.944 11:20:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:24.944 11:20:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.944 11:20:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.944 11:20:06 -- common/autotest_common.sh@10 -- # set +x 00:09:24.944 ************************************ 00:09:24.944 START TEST spdkcli_tcp 00:09:24.944 ************************************ 00:09:24.944 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:24.944 * Looking for test storage... 
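killprocess, traced above for pid 59159, probes the pid, refuses to signal a sudo wrapper, then kills and reaps. A sketch of that sequence (condensed; the comm= check mirrors the `ps --no-headers -o comm=` call in the trace, where the target appears as reactor_0):

    killprocess_sketch() {
            local pid=$1
            kill -0 "$pid"                                   # fail early if already gone
            if [[ $(uname) == Linux ]]; then
                    # never signal a sudo wrapper by mistake
                    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
            fi
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                      # reap the child, surface its status
    }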
00:09:24.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:24.944 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.944 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.944 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.944 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.944 11:20:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:24.944 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.944 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.944 --rc genhtml_branch_coverage=1 00:09:24.944 --rc genhtml_function_coverage=1 00:09:24.944 --rc genhtml_legend=1 00:09:24.944 --rc geninfo_all_blocks=1 00:09:24.944 --rc geninfo_unexecuted_blocks=1 00:09:24.944 00:09:24.944 ' 00:09:24.944 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.944 --rc genhtml_branch_coverage=1 00:09:24.944 --rc genhtml_function_coverage=1 00:09:24.944 --rc genhtml_legend=1 00:09:24.945 --rc geninfo_all_blocks=1 00:09:24.945 --rc geninfo_unexecuted_blocks=1 00:09:24.945 
00:09:24.945 ' 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.945 --rc genhtml_branch_coverage=1 00:09:24.945 --rc genhtml_function_coverage=1 00:09:24.945 --rc genhtml_legend=1 00:09:24.945 --rc geninfo_all_blocks=1 00:09:24.945 --rc geninfo_unexecuted_blocks=1 00:09:24.945 00:09:24.945 ' 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.945 --rc genhtml_branch_coverage=1 00:09:24.945 --rc genhtml_function_coverage=1 00:09:24.945 --rc genhtml_legend=1 00:09:24.945 --rc geninfo_all_blocks=1 00:09:24.945 --rc geninfo_unexecuted_blocks=1 00:09:24.945 00:09:24.945 ' 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59277 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59277 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59277 ']' 00:09:24.945 11:20:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.945 11:20:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.945 [2024-10-07 11:20:06.583663] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
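spdkcli_tcp exercises the RPC layer over TCP rather than the default UNIX socket. The trace below bridges the two with socat and then drives the target through the bridge; the commands are the ones traced, shown here in isolation:

    # expose the target's UNIX RPC socket on 127.0.0.1:9998
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # query through the bridge: up to 100 connect retries, 2 s timeout per call
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods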
00:09:24.945 [2024-10-07 11:20:06.583836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59277 ] 00:09:25.225 [2024-10-07 11:20:06.751879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.482 [2024-10-07 11:20:06.985855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.482 [2024-10-07 11:20:06.985875] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.416 11:20:07 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.416 11:20:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:09:26.416 11:20:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59294 00:09:26.416 11:20:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:26.416 11:20:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:26.675 [ 00:09:26.675 "bdev_malloc_delete", 00:09:26.675 "bdev_malloc_create", 00:09:26.675 "bdev_null_resize", 00:09:26.675 "bdev_null_delete", 00:09:26.675 "bdev_null_create", 00:09:26.675 "bdev_nvme_cuse_unregister", 00:09:26.675 "bdev_nvme_cuse_register", 00:09:26.675 "bdev_opal_new_user", 00:09:26.675 "bdev_opal_set_lock_state", 00:09:26.675 "bdev_opal_delete", 00:09:26.675 "bdev_opal_get_info", 00:09:26.675 "bdev_opal_create", 00:09:26.675 "bdev_nvme_opal_revert", 00:09:26.675 "bdev_nvme_opal_init", 00:09:26.675 "bdev_nvme_send_cmd", 00:09:26.675 "bdev_nvme_set_keys", 00:09:26.675 "bdev_nvme_get_path_iostat", 00:09:26.675 "bdev_nvme_get_mdns_discovery_info", 00:09:26.675 "bdev_nvme_stop_mdns_discovery", 00:09:26.675 "bdev_nvme_start_mdns_discovery", 00:09:26.675 "bdev_nvme_set_multipath_policy", 00:09:26.675 "bdev_nvme_set_preferred_path", 00:09:26.675 "bdev_nvme_get_io_paths", 00:09:26.675 "bdev_nvme_remove_error_injection", 00:09:26.675 "bdev_nvme_add_error_injection", 00:09:26.675 "bdev_nvme_get_discovery_info", 00:09:26.675 "bdev_nvme_stop_discovery", 00:09:26.675 "bdev_nvme_start_discovery", 00:09:26.675 "bdev_nvme_get_controller_health_info", 00:09:26.675 "bdev_nvme_disable_controller", 00:09:26.675 "bdev_nvme_enable_controller", 00:09:26.675 "bdev_nvme_reset_controller", 00:09:26.675 "bdev_nvme_get_transport_statistics", 00:09:26.675 "bdev_nvme_apply_firmware", 00:09:26.675 "bdev_nvme_detach_controller", 00:09:26.675 "bdev_nvme_get_controllers", 00:09:26.675 "bdev_nvme_attach_controller", 00:09:26.675 "bdev_nvme_set_hotplug", 00:09:26.675 "bdev_nvme_set_options", 00:09:26.675 "bdev_passthru_delete", 00:09:26.675 "bdev_passthru_create", 00:09:26.675 "bdev_lvol_set_parent_bdev", 00:09:26.675 "bdev_lvol_set_parent", 00:09:26.675 "bdev_lvol_check_shallow_copy", 00:09:26.675 "bdev_lvol_start_shallow_copy", 00:09:26.675 "bdev_lvol_grow_lvstore", 00:09:26.675 "bdev_lvol_get_lvols", 00:09:26.675 "bdev_lvol_get_lvstores", 00:09:26.675 "bdev_lvol_delete", 00:09:26.675 "bdev_lvol_set_read_only", 00:09:26.675 "bdev_lvol_resize", 00:09:26.675 "bdev_lvol_decouple_parent", 00:09:26.675 "bdev_lvol_inflate", 00:09:26.675 "bdev_lvol_rename", 00:09:26.675 "bdev_lvol_clone_bdev", 00:09:26.675 "bdev_lvol_clone", 00:09:26.675 "bdev_lvol_snapshot", 00:09:26.675 "bdev_lvol_create", 00:09:26.675 "bdev_lvol_delete_lvstore", 00:09:26.675 "bdev_lvol_rename_lvstore", 00:09:26.675 
"bdev_lvol_create_lvstore", 00:09:26.675 "bdev_raid_set_options", 00:09:26.675 "bdev_raid_remove_base_bdev", 00:09:26.675 "bdev_raid_add_base_bdev", 00:09:26.675 "bdev_raid_delete", 00:09:26.675 "bdev_raid_create", 00:09:26.675 "bdev_raid_get_bdevs", 00:09:26.675 "bdev_error_inject_error", 00:09:26.675 "bdev_error_delete", 00:09:26.675 "bdev_error_create", 00:09:26.675 "bdev_split_delete", 00:09:26.675 "bdev_split_create", 00:09:26.675 "bdev_delay_delete", 00:09:26.675 "bdev_delay_create", 00:09:26.675 "bdev_delay_update_latency", 00:09:26.675 "bdev_zone_block_delete", 00:09:26.675 "bdev_zone_block_create", 00:09:26.675 "blobfs_create", 00:09:26.675 "blobfs_detect", 00:09:26.675 "blobfs_set_cache_size", 00:09:26.675 "bdev_xnvme_delete", 00:09:26.675 "bdev_xnvme_create", 00:09:26.675 "bdev_aio_delete", 00:09:26.675 "bdev_aio_rescan", 00:09:26.675 "bdev_aio_create", 00:09:26.675 "bdev_ftl_set_property", 00:09:26.675 "bdev_ftl_get_properties", 00:09:26.675 "bdev_ftl_get_stats", 00:09:26.675 "bdev_ftl_unmap", 00:09:26.675 "bdev_ftl_unload", 00:09:26.675 "bdev_ftl_delete", 00:09:26.675 "bdev_ftl_load", 00:09:26.675 "bdev_ftl_create", 00:09:26.675 "bdev_virtio_attach_controller", 00:09:26.675 "bdev_virtio_scsi_get_devices", 00:09:26.675 "bdev_virtio_detach_controller", 00:09:26.675 "bdev_virtio_blk_set_hotplug", 00:09:26.675 "bdev_iscsi_delete", 00:09:26.675 "bdev_iscsi_create", 00:09:26.675 "bdev_iscsi_set_options", 00:09:26.675 "accel_error_inject_error", 00:09:26.675 "ioat_scan_accel_module", 00:09:26.675 "dsa_scan_accel_module", 00:09:26.675 "iaa_scan_accel_module", 00:09:26.675 "keyring_file_remove_key", 00:09:26.675 "keyring_file_add_key", 00:09:26.675 "keyring_linux_set_options", 00:09:26.675 "fsdev_aio_delete", 00:09:26.675 "fsdev_aio_create", 00:09:26.675 "iscsi_get_histogram", 00:09:26.675 "iscsi_enable_histogram", 00:09:26.675 "iscsi_set_options", 00:09:26.675 "iscsi_get_auth_groups", 00:09:26.675 "iscsi_auth_group_remove_secret", 00:09:26.675 "iscsi_auth_group_add_secret", 00:09:26.675 "iscsi_delete_auth_group", 00:09:26.675 "iscsi_create_auth_group", 00:09:26.675 "iscsi_set_discovery_auth", 00:09:26.675 "iscsi_get_options", 00:09:26.675 "iscsi_target_node_request_logout", 00:09:26.675 "iscsi_target_node_set_redirect", 00:09:26.675 "iscsi_target_node_set_auth", 00:09:26.675 "iscsi_target_node_add_lun", 00:09:26.675 "iscsi_get_stats", 00:09:26.675 "iscsi_get_connections", 00:09:26.675 "iscsi_portal_group_set_auth", 00:09:26.675 "iscsi_start_portal_group", 00:09:26.675 "iscsi_delete_portal_group", 00:09:26.675 "iscsi_create_portal_group", 00:09:26.675 "iscsi_get_portal_groups", 00:09:26.675 "iscsi_delete_target_node", 00:09:26.675 "iscsi_target_node_remove_pg_ig_maps", 00:09:26.675 "iscsi_target_node_add_pg_ig_maps", 00:09:26.675 "iscsi_create_target_node", 00:09:26.675 "iscsi_get_target_nodes", 00:09:26.675 "iscsi_delete_initiator_group", 00:09:26.675 "iscsi_initiator_group_remove_initiators", 00:09:26.675 "iscsi_initiator_group_add_initiators", 00:09:26.675 "iscsi_create_initiator_group", 00:09:26.675 "iscsi_get_initiator_groups", 00:09:26.675 "nvmf_set_crdt", 00:09:26.675 "nvmf_set_config", 00:09:26.675 "nvmf_set_max_subsystems", 00:09:26.675 "nvmf_stop_mdns_prr", 00:09:26.675 "nvmf_publish_mdns_prr", 00:09:26.675 "nvmf_subsystem_get_listeners", 00:09:26.675 "nvmf_subsystem_get_qpairs", 00:09:26.675 "nvmf_subsystem_get_controllers", 00:09:26.675 "nvmf_get_stats", 00:09:26.675 "nvmf_get_transports", 00:09:26.675 "nvmf_create_transport", 00:09:26.675 "nvmf_get_targets", 00:09:26.675 
"nvmf_delete_target", 00:09:26.675 "nvmf_create_target", 00:09:26.675 "nvmf_subsystem_allow_any_host", 00:09:26.675 "nvmf_subsystem_set_keys", 00:09:26.675 "nvmf_subsystem_remove_host", 00:09:26.675 "nvmf_subsystem_add_host", 00:09:26.675 "nvmf_ns_remove_host", 00:09:26.675 "nvmf_ns_add_host", 00:09:26.675 "nvmf_subsystem_remove_ns", 00:09:26.675 "nvmf_subsystem_set_ns_ana_group", 00:09:26.675 "nvmf_subsystem_add_ns", 00:09:26.675 "nvmf_subsystem_listener_set_ana_state", 00:09:26.675 "nvmf_discovery_get_referrals", 00:09:26.675 "nvmf_discovery_remove_referral", 00:09:26.675 "nvmf_discovery_add_referral", 00:09:26.675 "nvmf_subsystem_remove_listener", 00:09:26.675 "nvmf_subsystem_add_listener", 00:09:26.675 "nvmf_delete_subsystem", 00:09:26.675 "nvmf_create_subsystem", 00:09:26.675 "nvmf_get_subsystems", 00:09:26.675 "env_dpdk_get_mem_stats", 00:09:26.675 "nbd_get_disks", 00:09:26.675 "nbd_stop_disk", 00:09:26.675 "nbd_start_disk", 00:09:26.675 "ublk_recover_disk", 00:09:26.675 "ublk_get_disks", 00:09:26.675 "ublk_stop_disk", 00:09:26.675 "ublk_start_disk", 00:09:26.675 "ublk_destroy_target", 00:09:26.675 "ublk_create_target", 00:09:26.675 "virtio_blk_create_transport", 00:09:26.675 "virtio_blk_get_transports", 00:09:26.675 "vhost_controller_set_coalescing", 00:09:26.675 "vhost_get_controllers", 00:09:26.675 "vhost_delete_controller", 00:09:26.675 "vhost_create_blk_controller", 00:09:26.675 "vhost_scsi_controller_remove_target", 00:09:26.675 "vhost_scsi_controller_add_target", 00:09:26.675 "vhost_start_scsi_controller", 00:09:26.675 "vhost_create_scsi_controller", 00:09:26.675 "thread_set_cpumask", 00:09:26.675 "scheduler_set_options", 00:09:26.675 "framework_get_governor", 00:09:26.675 "framework_get_scheduler", 00:09:26.675 "framework_set_scheduler", 00:09:26.675 "framework_get_reactors", 00:09:26.675 "thread_get_io_channels", 00:09:26.675 "thread_get_pollers", 00:09:26.675 "thread_get_stats", 00:09:26.675 "framework_monitor_context_switch", 00:09:26.675 "spdk_kill_instance", 00:09:26.675 "log_enable_timestamps", 00:09:26.675 "log_get_flags", 00:09:26.675 "log_clear_flag", 00:09:26.675 "log_set_flag", 00:09:26.675 "log_get_level", 00:09:26.675 "log_set_level", 00:09:26.675 "log_get_print_level", 00:09:26.675 "log_set_print_level", 00:09:26.675 "framework_enable_cpumask_locks", 00:09:26.675 "framework_disable_cpumask_locks", 00:09:26.675 "framework_wait_init", 00:09:26.675 "framework_start_init", 00:09:26.675 "scsi_get_devices", 00:09:26.675 "bdev_get_histogram", 00:09:26.675 "bdev_enable_histogram", 00:09:26.675 "bdev_set_qos_limit", 00:09:26.675 "bdev_set_qd_sampling_period", 00:09:26.675 "bdev_get_bdevs", 00:09:26.675 "bdev_reset_iostat", 00:09:26.675 "bdev_get_iostat", 00:09:26.675 "bdev_examine", 00:09:26.675 "bdev_wait_for_examine", 00:09:26.675 "bdev_set_options", 00:09:26.675 "accel_get_stats", 00:09:26.675 "accel_set_options", 00:09:26.675 "accel_set_driver", 00:09:26.675 "accel_crypto_key_destroy", 00:09:26.675 "accel_crypto_keys_get", 00:09:26.675 "accel_crypto_key_create", 00:09:26.676 "accel_assign_opc", 00:09:26.676 "accel_get_module_info", 00:09:26.676 "accel_get_opc_assignments", 00:09:26.676 "vmd_rescan", 00:09:26.676 "vmd_remove_device", 00:09:26.676 "vmd_enable", 00:09:26.676 "sock_get_default_impl", 00:09:26.676 "sock_set_default_impl", 00:09:26.676 "sock_impl_set_options", 00:09:26.676 "sock_impl_get_options", 00:09:26.676 "iobuf_get_stats", 00:09:26.676 "iobuf_set_options", 00:09:26.676 "keyring_get_keys", 00:09:26.676 "framework_get_pci_devices", 00:09:26.676 
"framework_get_config", 00:09:26.676 "framework_get_subsystems", 00:09:26.676 "fsdev_set_opts", 00:09:26.676 "fsdev_get_opts", 00:09:26.676 "trace_get_info", 00:09:26.676 "trace_get_tpoint_group_mask", 00:09:26.676 "trace_disable_tpoint_group", 00:09:26.676 "trace_enable_tpoint_group", 00:09:26.676 "trace_clear_tpoint_mask", 00:09:26.676 "trace_set_tpoint_mask", 00:09:26.676 "notify_get_notifications", 00:09:26.676 "notify_get_types", 00:09:26.676 "spdk_get_version", 00:09:26.676 "rpc_get_methods" 00:09:26.676 ] 00:09:26.676 11:20:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:26.676 11:20:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:26.676 11:20:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59277 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59277 ']' 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59277 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59277 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59277' 00:09:26.676 killing process with pid 59277 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59277 00:09:26.676 11:20:08 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59277 00:09:29.959 00:09:29.959 real 0m4.902s 00:09:29.959 user 0m8.638s 00:09:29.959 sys 0m0.742s 00:09:29.959 11:20:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.959 ************************************ 00:09:29.959 END TEST spdkcli_tcp 00:09:29.959 ************************************ 00:09:29.959 11:20:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 11:20:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:29.959 11:20:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.959 11:20:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.959 11:20:11 -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 ************************************ 00:09:29.959 START TEST dpdk_mem_utility 00:09:29.959 ************************************ 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:29.959 * Looking for test storage... 
00:09:29.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.959 11:20:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:29.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.959 --rc genhtml_branch_coverage=1 00:09:29.959 --rc genhtml_function_coverage=1 00:09:29.959 --rc genhtml_legend=1 00:09:29.959 --rc geninfo_all_blocks=1 00:09:29.959 --rc geninfo_unexecuted_blocks=1 00:09:29.959 00:09:29.959 ' 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:29.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.959 --rc 
genhtml_branch_coverage=1 00:09:29.959 --rc genhtml_function_coverage=1 00:09:29.959 --rc genhtml_legend=1 00:09:29.959 --rc geninfo_all_blocks=1 00:09:29.959 --rc geninfo_unexecuted_blocks=1 00:09:29.959 00:09:29.959 ' 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:29.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.959 --rc genhtml_branch_coverage=1 00:09:29.959 --rc genhtml_function_coverage=1 00:09:29.959 --rc genhtml_legend=1 00:09:29.959 --rc geninfo_all_blocks=1 00:09:29.959 --rc geninfo_unexecuted_blocks=1 00:09:29.959 00:09:29.959 ' 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:29.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.959 --rc genhtml_branch_coverage=1 00:09:29.959 --rc genhtml_function_coverage=1 00:09:29.959 --rc genhtml_legend=1 00:09:29.959 --rc geninfo_all_blocks=1 00:09:29.959 --rc geninfo_unexecuted_blocks=1 00:09:29.959 00:09:29.959 ' 00:09:29.959 11:20:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:29.959 11:20:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59410 00:09:29.959 11:20:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:29.959 11:20:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59410 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59410 ']' 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.959 11:20:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:29.959 [2024-10-07 11:20:11.586157] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
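The dpdk_mem_utility flow traced below is a two-step pipeline: an RPC asks the running target to dump its DPDK memory state to a file, then the MEM_SCRIPT summarizes the dump offline. In isolation, with paths taken from the trace:

    # step 1: the target writes its memory state; the RPC returns the dump path
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # -> /tmp/spdk_mem_dump.txt

    # step 2: post-process the dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py         # heap/mempool/memzone totals
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0    # per-element detail for heap 0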
00:09:29.959 [2024-10-07 11:20:11.586314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59410 ] 00:09:30.218 [2024-10-07 11:20:11.763558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.476 [2024-10-07 11:20:12.059201] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.417 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.417 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:09:31.417 11:20:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:31.417 11:20:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:31.417 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.417 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:31.417 { 00:09:31.417 "filename": "/tmp/spdk_mem_dump.txt" 00:09:31.417 } 00:09:31.417 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.417 11:20:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:31.417 DPDK memory size 866.000000 MiB in 1 heap(s) 00:09:31.417 1 heaps totaling size 866.000000 MiB 00:09:31.417 size: 866.000000 MiB heap id: 0 00:09:31.417 end heaps---------- 00:09:31.417 9 mempools totaling size 642.649841 MiB 00:09:31.417 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:31.417 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:31.417 size: 92.545471 MiB name: bdev_io_59410 00:09:31.417 size: 51.011292 MiB name: evtpool_59410 00:09:31.417 size: 50.003479 MiB name: msgpool_59410 00:09:31.417 size: 36.509338 MiB name: fsdev_io_59410 00:09:31.417 size: 21.763794 MiB name: PDU_Pool 00:09:31.417 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:31.417 size: 0.026123 MiB name: Session_Pool 00:09:31.417 end mempools------- 00:09:31.417 6 memzones totaling size 4.142822 MiB 00:09:31.417 size: 1.000366 MiB name: RG_ring_0_59410 00:09:31.417 size: 1.000366 MiB name: RG_ring_1_59410 00:09:31.417 size: 1.000366 MiB name: RG_ring_4_59410 00:09:31.417 size: 1.000366 MiB name: RG_ring_5_59410 00:09:31.417 size: 0.125366 MiB name: RG_ring_2_59410 00:09:31.417 size: 0.015991 MiB name: RG_ring_3_59410 00:09:31.417 end memzones------- 00:09:31.417 11:20:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:31.678 heap id: 0 total size: 866.000000 MiB number of busy elements: 309 number of free elements: 19 00:09:31.678 list of free elements. 
size: 19.915039 MiB
00:09:31.678 element at address: 0x200000400000 with size: 1.999451 MiB
00:09:31.678 element at address: 0x200000800000 with size: 1.996887 MiB
00:09:31.678 element at address: 0x200009600000 with size: 1.995972 MiB
00:09:31.678 element at address: 0x20000d800000 with size: 1.995972 MiB
00:09:31.678 element at address: 0x200007000000 with size: 1.991028 MiB
00:09:31.678 element at address: 0x20001bf00040 with size: 0.999939 MiB
00:09:31.678 element at address: 0x20001c300040 with size: 0.999939 MiB
00:09:31.678 element at address: 0x20001c400000 with size: 0.999084 MiB
00:09:31.678 element at address: 0x200035000000 with size: 0.994324 MiB
00:09:31.678 element at address: 0x20001bc00000 with size: 0.959656 MiB
00:09:31.678 element at address: 0x20001c700040 with size: 0.936401 MiB
00:09:31.678 element at address: 0x200000200000 with size: 0.831909 MiB
00:09:31.678 element at address: 0x20001de00000 with size: 0.562195 MiB
00:09:31.678 element at address: 0x200003e00000 with size: 0.490662 MiB
00:09:31.678 element at address: 0x20001c000000 with size: 0.489197 MiB
00:09:31.678 element at address: 0x20001c800000 with size: 0.485413 MiB
00:09:31.678 element at address: 0x200015e00000 with size: 0.443481 MiB
00:09:31.678 element at address: 0x20002b200000 with size: 0.390442 MiB
00:09:31.678 element at address: 0x200003a00000 with size: 0.353088 MiB
00:09:31.678 list of standard malloc elements. size: 199.286255 MiB
00:09:31.678 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:09:31.678 element at address: 0x2000097fef80 with size: 64.000183 MiB
[several hundred further standard-malloc entries, timestamps 00:09:31.678-00:09:31.681, elided: a handful of 1.000183 MiB and smaller headers, then long near-identical runs of "element at address: 0x... with size: 0.000244 MiB" slots across the 0x2000002d..., 0x200003..., 0x20000d7ff..., 0x200015..., 0x20001c..., 0x20001de9... and 0x20002b26... ranges]
00:09:31.681 list of memzone associated elements.
size: 646.798706 MiB 00:09:31.681 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:09:31.681 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:31.681 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:09:31.681 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:31.681 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:09:31.681 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59410_0 00:09:31.681 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:09:31.681 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59410_0 00:09:31.681 element at address: 0x200003fff340 with size: 48.003113 MiB 00:09:31.681 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59410_0 00:09:31.681 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:09:31.681 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59410_0 00:09:31.681 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:09:31.681 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:31.681 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:09:31.681 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:31.681 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:09:31.681 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59410 00:09:31.681 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:09:31.681 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59410 00:09:31.681 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:31.681 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59410 00:09:31.681 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:09:31.681 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:31.681 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:09:31.681 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:31.681 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:09:31.681 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:31.681 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:09:31.681 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:31.681 element at address: 0x200003eff100 with size: 1.000549 MiB 00:09:31.681 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59410 00:09:31.681 element at address: 0x200003affb80 with size: 1.000549 MiB 00:09:31.681 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59410 00:09:31.681 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:09:31.681 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59410 00:09:31.681 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:09:31.681 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59410 00:09:31.681 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:09:31.681 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59410 00:09:31.681 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:09:31.681 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59410 00:09:31.681 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:09:31.681 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:31.681 element at address: 0x200015e72280 with size: 0.500549 MiB 00:09:31.681 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:09:31.681 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:09:31.681 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:31.681 element at address: 0x200003a5e880 with size: 0.125549 MiB 00:09:31.681 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59410 00:09:31.681 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:09:31.681 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:31.681 element at address: 0x20002b264140 with size: 0.023804 MiB 00:09:31.681 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:31.681 element at address: 0x200003a5a640 with size: 0.016174 MiB 00:09:31.681 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59410 00:09:31.681 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:09:31.681 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:31.681 element at address: 0x2000002d6080 with size: 0.000366 MiB 00:09:31.681 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59410 00:09:31.681 element at address: 0x200003aff900 with size: 0.000366 MiB 00:09:31.681 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59410 00:09:31.681 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:09:31.681 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59410 00:09:31.681 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:09:31.681 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:31.681 11:20:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:31.681 11:20:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59410 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59410 ']' 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59410 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59410 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.681 killing process with pid 59410 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59410' 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59410 00:09:31.681 11:20:13 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59410 00:09:34.212 00:09:34.212 real 0m4.707s 00:09:34.212 user 0m4.526s 00:09:34.212 sys 0m0.719s 00:09:34.212 11:20:15 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.212 11:20:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:34.212 ************************************ 00:09:34.212 END TEST dpdk_mem_utility 00:09:34.212 ************************************ 00:09:34.470 11:20:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:34.470 11:20:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:34.470 11:20:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.470 11:20:15 -- common/autotest_common.sh@10 -- # set +x 
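The heap, mempool, and memzone listings above are produced in two steps, both visible in the xtrace: an RPC that makes the target write a dump file, then the Python helper that decodes it. A minimal sketch of the same flow by hand, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

  # Ask the running target to dump its DPDK memory stats; the reply names
  # the dump file, here {"filename": "/tmp/spdk_mem_dump.txt"}.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Decode it: overall summary first, then the detailed map of heap 0 (-m 0).
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0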
00:09:34.470 ************************************ 00:09:34.471 START TEST event 00:09:34.471 ************************************ 00:09:34.471 11:20:15 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:34.471 * Looking for test storage... 00:09:34.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:34.471 11:20:16 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:34.471 11:20:16 event -- common/autotest_common.sh@1681 -- # lcov --version 00:09:34.471 11:20:16 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:34.729 11:20:16 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:34.729 11:20:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.729 11:20:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.729 11:20:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.729 11:20:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.729 11:20:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.729 11:20:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.729 11:20:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.729 11:20:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.729 11:20:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.729 11:20:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.729 11:20:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.729 11:20:16 event -- scripts/common.sh@344 -- # case "$op" in 00:09:34.729 11:20:16 event -- scripts/common.sh@345 -- # : 1 00:09:34.729 11:20:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.729 11:20:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.729 11:20:16 event -- scripts/common.sh@365 -- # decimal 1 00:09:34.729 11:20:16 event -- scripts/common.sh@353 -- # local d=1 00:09:34.729 11:20:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.729 11:20:16 event -- scripts/common.sh@355 -- # echo 1 00:09:34.729 11:20:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.729 11:20:16 event -- scripts/common.sh@366 -- # decimal 2 00:09:34.729 11:20:16 event -- scripts/common.sh@353 -- # local d=2 00:09:34.729 11:20:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.729 11:20:16 event -- scripts/common.sh@355 -- # echo 2 00:09:34.729 11:20:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.729 11:20:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.729 11:20:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.729 11:20:16 event -- scripts/common.sh@368 -- # return 0 00:09:34.729 11:20:16 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.729 11:20:16 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:34.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.729 --rc genhtml_branch_coverage=1 00:09:34.729 --rc genhtml_function_coverage=1 00:09:34.729 --rc genhtml_legend=1 00:09:34.729 --rc geninfo_all_blocks=1 00:09:34.729 --rc geninfo_unexecuted_blocks=1 00:09:34.729 00:09:34.729 ' 00:09:34.729 11:20:16 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:34.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.729 --rc genhtml_branch_coverage=1 00:09:34.729 --rc genhtml_function_coverage=1 00:09:34.729 --rc genhtml_legend=1 00:09:34.729 --rc 
geninfo_all_blocks=1 00:09:34.729 --rc geninfo_unexecuted_blocks=1 00:09:34.729 00:09:34.729 ' 00:09:34.729 11:20:16 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:34.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.729 --rc genhtml_branch_coverage=1 00:09:34.729 --rc genhtml_function_coverage=1 00:09:34.729 --rc genhtml_legend=1 00:09:34.729 --rc geninfo_all_blocks=1 00:09:34.729 --rc geninfo_unexecuted_blocks=1 00:09:34.729 00:09:34.729 ' 00:09:34.729 11:20:16 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:34.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.729 --rc genhtml_branch_coverage=1 00:09:34.729 --rc genhtml_function_coverage=1 00:09:34.729 --rc genhtml_legend=1 00:09:34.729 --rc geninfo_all_blocks=1 00:09:34.729 --rc geninfo_unexecuted_blocks=1 00:09:34.730 00:09:34.730 ' 00:09:34.730 11:20:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:34.730 11:20:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:34.730 11:20:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:34.730 11:20:16 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:34.730 11:20:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.730 11:20:16 event -- common/autotest_common.sh@10 -- # set +x 00:09:34.730 ************************************ 00:09:34.730 START TEST event_perf 00:09:34.730 ************************************ 00:09:34.730 11:20:16 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:34.730 Running I/O for 1 seconds...[2024-10-07 11:20:16.300899] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:09:34.730 [2024-10-07 11:20:16.301133] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:09:34.988 [2024-10-07 11:20:16.464849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.246 [2024-10-07 11:20:16.702139] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.246 [2024-10-07 11:20:16.702365] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.246 Running I/O for 1 seconds...[2024-10-07 11:20:16.702492] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.246 [2024-10-07 11:20:16.702520] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.671 00:09:36.671 lcore 0: 102225 00:09:36.671 lcore 1: 102223 00:09:36.671 lcore 2: 102222 00:09:36.671 lcore 3: 102224 00:09:36.671 done. 
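The lcore counters above are the entire payload of event_perf: one event count per lcore in the CPU mask, measured over the requested runtime. The invocation, exactly as traced:

  # 4 reactors (mask 0xF), 1-second measurement; prints events handled per lcore.
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1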
00:09:36.671 00:09:36.672 real 0m1.857s 00:09:36.672 user 0m4.582s 00:09:36.672 sys 0m0.148s 00:09:36.672 ************************************ 00:09:36.672 END TEST event_perf 00:09:36.672 ************************************ 00:09:36.672 11:20:18 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.672 11:20:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:36.672 11:20:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:36.672 11:20:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:36.672 11:20:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.672 11:20:18 event -- common/autotest_common.sh@10 -- # set +x 00:09:36.672 ************************************ 00:09:36.672 START TEST event_reactor 00:09:36.672 ************************************ 00:09:36.672 11:20:18 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:36.672 [2024-10-07 11:20:18.244826] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:09:36.672 [2024-10-07 11:20:18.245212] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59563 ] 00:09:36.930 [2024-10-07 11:20:18.427935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.190 [2024-10-07 11:20:18.678876] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.568 test_start 00:09:38.568 oneshot 00:09:38.568 tick 100 00:09:38.568 tick 100 00:09:38.568 tick 250 00:09:38.568 tick 100 00:09:38.568 tick 100 00:09:38.568 tick 100 00:09:38.568 tick 250 00:09:38.568 tick 500 00:09:38.568 tick 100 00:09:38.568 tick 100 00:09:38.568 tick 250 00:09:38.568 tick 100 00:09:38.568 tick 100 00:09:38.568 test_end 00:09:38.568 00:09:38.568 real 0m1.903s 00:09:38.568 user 0m1.655s 00:09:38.569 sys 0m0.137s 00:09:38.569 ************************************ 00:09:38.569 END TEST event_reactor 00:09:38.569 ************************************ 00:09:38.569 11:20:20 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.569 11:20:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:38.569 11:20:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:38.569 11:20:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:38.569 11:20:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.569 11:20:20 event -- common/autotest_common.sh@10 -- # set +x 00:09:38.569 ************************************ 00:09:38.569 START TEST event_reactor_perf 00:09:38.569 ************************************ 00:09:38.569 11:20:20 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:38.569 [2024-10-07 11:20:20.221703] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
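The test_start/oneshot/tick/test_end block above comes from the reactor app: judging by the trace, "oneshot" fires once and the numbered ticks look like timer pollers with periods of 100, 250, and 500 units, the shorter periods recurring more often. Invocation as traced:

  # Single reactor, 1-second run; emits the oneshot/tick trace seen above.
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1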
00:09:38.569 [2024-10-07 11:20:20.222067] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59605 ] 00:09:38.827 [2024-10-07 11:20:20.396618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.085 [2024-10-07 11:20:20.634266] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.478 test_start 00:09:40.478 test_end 00:09:40.478 Performance: 348703 events per second 00:09:40.478 00:09:40.478 real 0m1.877s 00:09:40.478 user 0m1.641s 00:09:40.478 sys 0m0.124s 00:09:40.478 11:20:22 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.478 ************************************ 00:09:40.478 END TEST event_reactor_perf 00:09:40.478 ************************************ 00:09:40.478 11:20:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:40.478 11:20:22 event -- event/event.sh@49 -- # uname -s 00:09:40.478 11:20:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:40.478 11:20:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:40.478 11:20:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.478 11:20:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.478 11:20:22 event -- common/autotest_common.sh@10 -- # set +x 00:09:40.478 ************************************ 00:09:40.478 START TEST event_scheduler 00:09:40.478 ************************************ 00:09:40.478 11:20:22 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:40.766 * Looking for test storage... 
00:09:40.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:40.766 11:20:22 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:40.766 11:20:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:40.766 11:20:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:09:40.766 11:20:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:40.766 11:20:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.767 11:20:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:40.767 11:20:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.767 11:20:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.767 11:20:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.767 11:20:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:40.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.767 --rc genhtml_branch_coverage=1 00:09:40.767 --rc genhtml_function_coverage=1 00:09:40.767 --rc genhtml_legend=1 00:09:40.767 --rc geninfo_all_blocks=1 00:09:40.767 --rc geninfo_unexecuted_blocks=1 00:09:40.767 00:09:40.767 ' 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:40.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.767 --rc genhtml_branch_coverage=1 00:09:40.767 --rc genhtml_function_coverage=1 00:09:40.767 --rc genhtml_legend=1 00:09:40.767 --rc geninfo_all_blocks=1 00:09:40.767 --rc geninfo_unexecuted_blocks=1 00:09:40.767 00:09:40.767 ' 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:40.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.767 --rc genhtml_branch_coverage=1 00:09:40.767 --rc genhtml_function_coverage=1 00:09:40.767 --rc genhtml_legend=1 00:09:40.767 --rc geninfo_all_blocks=1 00:09:40.767 --rc geninfo_unexecuted_blocks=1 00:09:40.767 00:09:40.767 ' 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:40.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.767 --rc genhtml_branch_coverage=1 00:09:40.767 --rc genhtml_function_coverage=1 00:09:40.767 --rc genhtml_legend=1 00:09:40.767 --rc geninfo_all_blocks=1 00:09:40.767 --rc geninfo_unexecuted_blocks=1 00:09:40.767 00:09:40.767 ' 00:09:40.767 11:20:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:40.767 11:20:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59681 00:09:40.767 11:20:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:40.767 11:20:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:40.767 11:20:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59681 00:09:40.767 11:20:22 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59681 ']' 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.767 11:20:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:41.025 [2024-10-07 11:20:22.478188] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:09:41.025 [2024-10-07 11:20:22.478553] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59681 ] 00:09:41.025 [2024-10-07 11:20:22.656180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.284 [2024-10-07 11:20:22.964256] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.284 [2024-10-07 11:20:22.964447] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.284 [2024-10-07 11:20:22.964501] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.284 [2024-10-07 11:20:22.964814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.852 11:20:23 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.852 11:20:23 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:09:41.852 11:20:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:41.852 11:20:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.852 11:20:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:41.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:41.852 POWER: Cannot set governor of lcore 0 to userspace 00:09:41.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:41.852 POWER: Cannot set governor of lcore 0 to performance 00:09:41.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:41.852 POWER: Cannot set governor of lcore 0 to userspace 00:09:41.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:41.852 POWER: Cannot set governor of lcore 0 to userspace 00:09:41.852 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:41.852 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:41.852 POWER: Unable to set Power Management Environment for lcore 0 00:09:41.852 [2024-10-07 11:20:23.378796] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:09:41.852 [2024-10-07 11:20:23.378828] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:09:41.852 [2024-10-07 11:20:23.378847] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:41.852 [2024-10-07 11:20:23.378872] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:41.852 [2024-10-07 11:20:23.378885] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:41.852 [2024-10-07 11:20:23.378899] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:41.852 11:20:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.852 11:20:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:41.852 11:20:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.852 11:20:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.109 [2024-10-07 11:20:23.811550] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:42.109 11:20:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.109 11:20:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:42.109 11:20:23 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:42.109 11:20:23 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.109 11:20:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 ************************************ 00:09:42.368 START TEST scheduler_create_thread 00:09:42.368 ************************************ 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 2 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 3 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 4 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 5 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 6 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 7 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 8 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 9 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 10 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.368 11:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.305 11:20:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.305 11:20:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:43.305 11:20:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.305 11:20:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.682 11:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.682 11:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:44.682 11:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:44.682 11:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.682 11:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:45.633 ************************************ 00:09:45.633 END TEST scheduler_create_thread 00:09:45.633 ************************************ 00:09:45.633 11:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.633 00:09:45.633 real 0m3.379s 00:09:45.633 user 0m0.025s 00:09:45.633 sys 0m0.007s 00:09:45.633 11:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.633 11:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:45.633 11:20:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:45.633 11:20:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59681 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59681 ']' 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59681 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59681 00:09:45.633 killing process with pid 59681 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59681' 00:09:45.633 11:20:27 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59681 00:09:45.633 11:20:27 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 59681 00:09:45.892 [2024-10-07 11:20:27.583797] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:47.790 00:09:47.790 real 0m6.885s 00:09:47.790 user 0m13.035s 00:09:47.790 sys 0m0.647s 00:09:47.790 11:20:29 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.790 ************************************ 00:09:47.790 END TEST event_scheduler 00:09:47.790 ************************************ 00:09:47.790 11:20:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:47.790 11:20:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:47.790 11:20:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:47.790 11:20:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:47.790 11:20:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.790 11:20:29 event -- common/autotest_common.sh@10 -- # set +x 00:09:47.790 ************************************ 00:09:47.790 START TEST app_repeat 00:09:47.790 ************************************ 00:09:47.790 11:20:29 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59804 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:47.790 Process app_repeat pid: 59804 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59804' 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:47.790 spdk_app_start Round 0 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:47.790 11:20:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59804 /var/tmp/spdk-nbd.sock 00:09:47.790 11:20:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59804 ']' 00:09:47.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:47.790 11:20:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:47.790 11:20:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.790 11:20:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:47.790 11:20:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.790 11:20:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:47.790 [2024-10-07 11:20:29.169952] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
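Note on the waitforlisten step traced just above: the harness blocks until the freshly launched app_repeat process answers on its UNIX-domain RPC socket before driving any nbd traffic. A minimal sketch of that polling loop, assuming the stock scripts/rpc.py client and the socket path shown in the trace (the retry budget of 100 mirrors max_retries above; this is not the harness's exact implementation):

  # sketch only: poll an SPDK app's RPC socket until it responds
  wait_for_rpc() {
      local sock=$1 retries=${2:-100}
      while (( retries-- > 0 )); do
          # rpc_get_methods is answered by every SPDK app once its RPC server is up
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }
  wait_for_rpc /var/tmp/spdk-nbd.sock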
00:09:47.790 [2024-10-07 11:20:29.170104] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59804 ] 00:09:47.790 [2024-10-07 11:20:29.331118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.048 [2024-10-07 11:20:29.558484] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.048 [2024-10-07 11:20:29.558486] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.615 11:20:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.615 11:20:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:48.615 11:20:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:48.873 Malloc0 00:09:48.873 11:20:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:49.132 Malloc1 00:09:49.132 11:20:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:49.132 11:20:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:49.391 /dev/nbd0 00:09:49.391 11:20:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:49.391 11:20:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:49.391 11:20:30 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:49.391 1+0 records in 00:09:49.391 1+0 records out 00:09:49.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389975 s, 10.5 MB/s 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:49.391 11:20:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:49.391 11:20:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.391 11:20:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:49.391 11:20:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:49.650 /dev/nbd1 00:09:49.650 11:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:49.650 11:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:49.650 1+0 records in 00:09:49.650 1+0 records out 00:09:49.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052015 s, 7.9 MB/s 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:49.650 11:20:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:49.650 11:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.650 11:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:49.650 11:20:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:49.650 11:20:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.650 
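The waitfornbd checks traced above for nbd0 and nbd1 gate on two conditions: the device node appearing in /proc/partitions, and a single 4 KiB O_DIRECT read producing a file of nonzero size. Condensed into one function (the retry bound of 20, the dd parameters, and the nbdtest temp path all mirror the trace; the trace actually runs two separate retry loops, collapsed here for brevity):

  # sketch of the readiness check the traces above perform per nbd device
  waitfornbd() {
      local nbd_name=$1 tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break   # device registered yet?
          sleep 0.1
      done
      # a direct-I/O read of one block proves the device actually serves data
      dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      local size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]   # the trace tests '[' 4096 '!=' 0 ']'
  }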
11:20:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:49.908 { 00:09:49.908 "nbd_device": "/dev/nbd0", 00:09:49.908 "bdev_name": "Malloc0" 00:09:49.908 }, 00:09:49.908 { 00:09:49.908 "nbd_device": "/dev/nbd1", 00:09:49.908 "bdev_name": "Malloc1" 00:09:49.908 } 00:09:49.908 ]' 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:49.908 { 00:09:49.908 "nbd_device": "/dev/nbd0", 00:09:49.908 "bdev_name": "Malloc0" 00:09:49.908 }, 00:09:49.908 { 00:09:49.908 "nbd_device": "/dev/nbd1", 00:09:49.908 "bdev_name": "Malloc1" 00:09:49.908 } 00:09:49.908 ]' 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:49.908 /dev/nbd1' 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:49.908 /dev/nbd1' 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:49.908 256+0 records in 00:09:49.908 256+0 records out 00:09:49.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150193 s, 69.8 MB/s 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:49.908 256+0 records in 00:09:49.908 256+0 records out 00:09:49.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302305 s, 34.7 MB/s 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.908 11:20:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:50.167 256+0 records in 00:09:50.167 256+0 records out 00:09:50.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0366692 s, 28.6 MB/s 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:50.167 11:20:31 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.167 11:20:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.438 11:20:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.726 11:20:32 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:50.726 11:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:50.985 11:20:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:50.985 11:20:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:51.244 11:20:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:52.621 [2024-10-07 11:20:34.304859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:52.911 [2024-10-07 11:20:34.527508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.911 [2024-10-07 11:20:34.527509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.169 [2024-10-07 11:20:34.741601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:53.169 [2024-10-07 11:20:34.741715] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:54.554 spdk_app_start Round 1 00:09:54.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:54.554 11:20:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:54.554 11:20:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:54.554 11:20:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59804 /var/tmp/spdk-nbd.sock 00:09:54.554 11:20:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59804 ']' 00:09:54.554 11:20:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:54.554 11:20:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.554 11:20:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
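Round 1 now repeats the shape Round 0 just finished. Going by the script paths and RPCs visible in this trace, each iteration of the for i in {0..2} loop at event.sh@23 is roughly the following (a compressed reading of the log, not the verbatim event.sh source):

  # rough shape of one app_repeat round, per the traces in this log
  echo "spdk_app_start Round $i"
  waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock              # block until the RPC socket answers
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
  nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # app restarts for the next round
  sleep 3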
00:09:54.554 11:20:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.554 11:20:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:54.554 11:20:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.554 11:20:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:54.554 11:20:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:54.813 Malloc0 00:09:54.813 11:20:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:55.072 Malloc1 00:09:55.072 11:20:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:55.072 11:20:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:55.331 /dev/nbd0 00:09:55.331 11:20:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:55.331 11:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:55.331 1+0 records in 00:09:55.331 1+0 records out 
00:09:55.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297021 s, 13.8 MB/s 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:55.331 11:20:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:55.331 11:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.331 11:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:55.332 11:20:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:55.590 /dev/nbd1 00:09:55.590 11:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:55.590 11:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:55.591 1+0 records in 00:09:55.591 1+0 records out 00:09:55.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397602 s, 10.3 MB/s 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:55.591 11:20:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:55.591 11:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.591 11:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:55.591 11:20:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:55.591 11:20:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.591 11:20:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.850 11:20:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:55.850 { 00:09:55.850 "nbd_device": "/dev/nbd0", 00:09:55.850 "bdev_name": "Malloc0" 00:09:55.850 }, 00:09:55.850 { 00:09:55.850 "nbd_device": "/dev/nbd1", 00:09:55.850 "bdev_name": "Malloc1" 00:09:55.850 } 
00:09:55.850 ]' 00:09:55.850 11:20:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:55.850 11:20:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:55.850 { 00:09:55.850 "nbd_device": "/dev/nbd0", 00:09:55.850 "bdev_name": "Malloc0" 00:09:55.850 }, 00:09:55.850 { 00:09:55.850 "nbd_device": "/dev/nbd1", 00:09:55.850 "bdev_name": "Malloc1" 00:09:55.850 } 00:09:55.850 ]' 00:09:55.850 11:20:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:55.850 /dev/nbd1' 00:09:56.109 11:20:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.109 11:20:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:56.109 /dev/nbd1' 00:09:56.109 11:20:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:56.109 11:20:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:56.110 256+0 records in 00:09:56.110 256+0 records out 00:09:56.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120985 s, 86.7 MB/s 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:56.110 256+0 records in 00:09:56.110 256+0 records out 00:09:56.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272535 s, 38.5 MB/s 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:56.110 256+0 records in 00:09:56.110 256+0 records out 00:09:56.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369732 s, 28.4 MB/s 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:56.110 11:20:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.110 11:20:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.408 11:20:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.668 11:20:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:56.928 11:20:38 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:56.928 11:20:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:56.928 11:20:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:57.496 11:20:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:58.906 [2024-10-07 11:20:40.390518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:58.906 [2024-10-07 11:20:40.610925] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.906 [2024-10-07 11:20:40.610942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.163 [2024-10-07 11:20:40.820205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:59.163 [2024-10-07 11:20:40.820315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:00.536 spdk_app_start Round 2 00:10:00.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:00.536 11:20:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:00.536 11:20:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:00.536 11:20:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59804 /var/tmp/spdk-nbd.sock 00:10:00.536 11:20:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59804 ']' 00:10:00.536 11:20:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:00.537 11:20:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.537 11:20:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
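The write/verify pass that nbd_rpc_data_verify runs in every round reduces to: fill a 1 MiB temp file from /dev/urandom, dd it onto each exported nbd device with O_DIRECT, read it back with cmp, then delete the file and detach the devices. In outline, with sizes and paths exactly as traced:

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the nbd export
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte readback check
  done
  rm "$tmp"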
00:10:00.537 11:20:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.537 11:20:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:00.537 11:20:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.537 11:20:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:00.537 11:20:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:01.103 Malloc0 00:10:01.103 11:20:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:01.361 Malloc1 00:10:01.361 11:20:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:01.361 11:20:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:01.621 /dev/nbd0 00:10:01.621 11:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:01.621 11:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:01.621 1+0 records in 00:10:01.621 1+0 records out 
00:10:01.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325267 s, 12.6 MB/s 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:01.621 11:20:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:01.621 11:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:01.621 11:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:01.621 11:20:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:01.880 /dev/nbd1 00:10:01.880 11:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:01.880 11:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:01.880 1+0 records in 00:10:01.880 1+0 records out 00:10:01.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499633 s, 8.2 MB/s 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:01.880 11:20:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:02.138 11:20:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:02.138 11:20:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:02.138 11:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:02.138 11:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:02.138 11:20:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:02.138 11:20:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.138 11:20:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:02.398 { 00:10:02.398 "nbd_device": "/dev/nbd0", 00:10:02.398 "bdev_name": "Malloc0" 00:10:02.398 }, 00:10:02.398 { 00:10:02.398 "nbd_device": "/dev/nbd1", 00:10:02.398 "bdev_name": "Malloc1" 00:10:02.398 } 
00:10:02.398 ]' 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:02.398 { 00:10:02.398 "nbd_device": "/dev/nbd0", 00:10:02.398 "bdev_name": "Malloc0" 00:10:02.398 }, 00:10:02.398 { 00:10:02.398 "nbd_device": "/dev/nbd1", 00:10:02.398 "bdev_name": "Malloc1" 00:10:02.398 } 00:10:02.398 ]' 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:02.398 /dev/nbd1' 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:02.398 /dev/nbd1' 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:02.398 256+0 records in 00:10:02.398 256+0 records out 00:10:02.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122763 s, 85.4 MB/s 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:02.398 11:20:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:02.398 256+0 records in 00:10:02.398 256+0 records out 00:10:02.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344198 s, 30.5 MB/s 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:02.398 256+0 records in 00:10:02.398 256+0 records out 00:10:02.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.038552 s, 27.2 MB/s 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:02.398 11:20:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:02.399 11:20:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:02.399 11:20:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:02.399 11:20:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.399 11:20:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:02.399 11:20:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:02.399 11:20:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:02.399 11:20:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.399 11:20:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.969 11:20:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.227 11:20:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:03.486 11:20:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:03.486 11:20:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:03.486 11:20:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:03.486 11:20:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:03.486 11:20:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:04.053 11:20:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:05.428 [2024-10-07 11:20:46.923396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:05.686 [2024-10-07 11:20:47.149116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.686 [2024-10-07 11:20:47.149117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.686 [2024-10-07 11:20:47.356879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:05.686 [2024-10-07 11:20:47.356955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:07.061 11:20:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59804 /var/tmp/spdk-nbd.sock 00:10:07.061 11:20:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59804 ']' 00:10:07.061 11:20:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:07.061 11:20:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.061 11:20:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
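The killprocess call that tears the app down next (traced in full earlier for the scheduler app, pid 59681, at autotest_common.sh@950-974) verifies the pid is alive and checks what it belongs to before signalling. In outline, simplified from the trace (the sudo special case is skipped here since this run sees reactor processes only):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                      # still running?
      local name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here; a sudo wrapper gets special-cased
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                     # reap it so the socket is free for the next test
  }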
00:10:07.061 11:20:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.061 11:20:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:07.320 11:20:48 event.app_repeat -- event/event.sh@39 -- # killprocess 59804 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59804 ']' 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59804 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59804 00:10:07.320 killing process with pid 59804 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59804' 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59804 00:10:07.320 11:20:48 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59804 00:10:08.698 spdk_app_start is called in Round 0. 00:10:08.698 Shutdown signal received, stop current app iteration 00:10:08.698 Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 reinitialization... 00:10:08.698 spdk_app_start is called in Round 1. 00:10:08.698 Shutdown signal received, stop current app iteration 00:10:08.698 Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 reinitialization... 00:10:08.698 spdk_app_start is called in Round 2. 00:10:08.698 Shutdown signal received, stop current app iteration 00:10:08.698 Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 reinitialization... 00:10:08.698 spdk_app_start is called in Round 3. 00:10:08.698 Shutdown signal received, stop current app iteration 00:10:08.698 11:20:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:08.698 11:20:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:08.698 00:10:08.698 real 0m21.005s 00:10:08.698 user 0m44.005s 00:10:08.698 sys 0m3.605s 00:10:08.698 11:20:50 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.698 11:20:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:08.698 ************************************ 00:10:08.698 END TEST app_repeat 00:10:08.698 ************************************ 00:10:08.698 11:20:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:08.698 11:20:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:08.698 11:20:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:08.698 11:20:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.698 11:20:50 event -- common/autotest_common.sh@10 -- # set +x 00:10:08.698 ************************************ 00:10:08.698 START TEST cpu_locks 00:10:08.698 ************************************ 00:10:08.698 11:20:50 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:08.698 * Looking for test storage... 
00:10:08.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:08.698 11:20:50 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:08.698 11:20:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:10:08.698 11:20:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:08.956 11:20:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.956 11:20:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:08.956 11:20:50 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.956 11:20:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:08.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.957 --rc genhtml_branch_coverage=1 00:10:08.957 --rc genhtml_function_coverage=1 00:10:08.957 --rc genhtml_legend=1 00:10:08.957 --rc geninfo_all_blocks=1 00:10:08.957 --rc geninfo_unexecuted_blocks=1 00:10:08.957 00:10:08.957 ' 00:10:08.957 11:20:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:08.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.957 --rc genhtml_branch_coverage=1 00:10:08.957 --rc genhtml_function_coverage=1 
00:10:08.957 --rc genhtml_legend=1 00:10:08.957 --rc geninfo_all_blocks=1 00:10:08.957 --rc geninfo_unexecuted_blocks=1 00:10:08.957 00:10:08.957 ' 00:10:08.957 11:20:50 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:08.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.957 --rc genhtml_branch_coverage=1 00:10:08.957 --rc genhtml_function_coverage=1 00:10:08.957 --rc genhtml_legend=1 00:10:08.957 --rc geninfo_all_blocks=1 00:10:08.957 --rc geninfo_unexecuted_blocks=1 00:10:08.957 00:10:08.957 ' 00:10:08.957 11:20:50 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:08.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.957 --rc genhtml_branch_coverage=1 00:10:08.957 --rc genhtml_function_coverage=1 00:10:08.957 --rc genhtml_legend=1 00:10:08.957 --rc geninfo_all_blocks=1 00:10:08.957 --rc geninfo_unexecuted_blocks=1 00:10:08.957 00:10:08.957 ' 00:10:08.957 11:20:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:08.957 11:20:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:08.957 11:20:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:08.957 11:20:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:08.957 11:20:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:08.957 11:20:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.957 11:20:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:08.957 ************************************ 00:10:08.957 START TEST default_locks 00:10:08.957 ************************************ 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60273 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60273 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60273 ']' 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.957 11:20:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:08.957 [2024-10-07 11:20:50.590188] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:08.957 [2024-10-07 11:20:50.590395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60273 ] 00:10:09.215 [2024-10-07 11:20:50.771977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.474 [2024-10-07 11:20:51.005323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.426 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.427 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:10:10.427 11:20:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60273 00:10:10.427 11:20:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:10.427 11:20:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60273 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60273 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60273 ']' 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60273 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60273 00:10:10.994 killing process with pid 60273 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60273' 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60273 00:10:10.994 11:20:52 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60273 00:10:14.278 11:20:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60273 00:10:14.278 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:14.278 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60273 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:14.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:14.279 ERROR: process (pid: 60273) is no longer running 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60273 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60273 ']' 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.279 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60273) - No such process 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:14.279 00:10:14.279 real 0m4.965s 00:10:14.279 user 0m4.912s 00:10:14.279 sys 0m0.877s 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.279 11:20:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.279 ************************************ 00:10:14.279 END TEST default_locks 00:10:14.279 ************************************ 00:10:14.279 11:20:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:14.279 11:20:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:14.279 11:20:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.279 11:20:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.279 ************************************ 00:10:14.279 START TEST default_locks_via_rpc 00:10:14.279 ************************************ 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60355 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60355 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60355 ']' 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.279 11:20:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.279 [2024-10-07 11:20:55.607599] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:10:14.279 [2024-10-07 11:20:55.608093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60355 ] 00:10:14.279 [2024-10-07 11:20:55.789785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.537 [2024-10-07 11:20:56.027987] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60355 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60355 00:10:15.474 11:20:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60355 
00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60355 ']' 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60355 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60355 00:10:16.050 killing process with pid 60355 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60355' 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60355 00:10:16.050 11:20:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60355 00:10:18.585 00:10:18.585 real 0m4.659s 00:10:18.585 user 0m4.703s 00:10:18.585 sys 0m0.774s 00:10:18.585 11:21:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.585 11:21:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.585 ************************************ 00:10:18.585 END TEST default_locks_via_rpc 00:10:18.585 ************************************ 00:10:18.585 11:21:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:18.585 11:21:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:18.585 11:21:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.585 11:21:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:18.585 ************************************ 00:10:18.585 START TEST non_locking_app_on_locked_coremask 00:10:18.585 ************************************ 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60440 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60440 /var/tmp/spdk.sock 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60440 ']' 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:18.585 11:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:18.859 [2024-10-07 11:21:00.351097] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:10:18.859 [2024-10-07 11:21:00.351258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60440 ] 00:10:18.859 [2024-10-07 11:21:00.530651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.147 [2024-10-07 11:21:00.766695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60456 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60456 /var/tmp/spdk2.sock 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60456 ']' 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:20.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.084 11:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:20.344 [2024-10-07 11:21:01.841727] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:10:20.344 [2024-10-07 11:21:01.842681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60456 ] 00:10:20.344 [2024-10-07 11:21:02.053671] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:20.344 [2024-10-07 11:21:02.053752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.913 [2024-10-07 11:21:02.516968] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.444 11:21:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.444 11:21:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:23.444 11:21:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60440 00:10:23.444 11:21:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60440 00:10:23.444 11:21:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60440 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60440 ']' 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60440 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60440 00:10:24.380 killing process with pid 60440 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60440' 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60440 00:10:24.380 11:21:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60440 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60456 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60456 ']' 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60456 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60456 00:10:29.651 killing process with pid 60456 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60456' 00:10:29.651 11:21:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60456 00:10:29.651 11:21:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60456 00:10:32.960 ************************************ 00:10:32.960 END TEST non_locking_app_on_locked_coremask 00:10:32.960 ************************************ 00:10:32.960 00:10:32.960 real 0m13.746s 00:10:32.960 user 0m14.219s 00:10:32.960 sys 0m1.790s 00:10:32.960 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.960 11:21:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:32.960 11:21:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:32.960 11:21:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:32.960 11:21:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.960 11:21:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:32.960 ************************************ 00:10:32.960 START TEST locking_app_on_unlocked_coremask 00:10:32.960 ************************************ 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60626 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60626 /var/tmp/spdk.sock 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60626 ']' 00:10:32.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:32.960 11:21:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:32.960 [2024-10-07 11:21:14.160328] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:10:32.960 [2024-10-07 11:21:14.160778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60626 ] 00:10:32.960 [2024-10-07 11:21:14.341696] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:32.960 [2024-10-07 11:21:14.341795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.960 [2024-10-07 11:21:14.562619] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60647 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60647 /var/tmp/spdk2.sock 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60647 ']' 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.895 11:21:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:33.895 [2024-10-07 11:21:15.564288] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:33.895 [2024-10-07 11:21:15.564598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:10:34.153 [2024-10-07 11:21:15.731559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.720 [2024-10-07 11:21:16.165698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.624 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.624 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:36.624 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60647 00:10:36.624 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60647 00:10:36.624 11:21:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60626 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60626 ']' 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60626 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60626 00:10:37.560 killing process with pid 60626 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60626' 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60626 00:10:37.560 11:21:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60626 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60647 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60647 ']' 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60647 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60647 00:10:44.125 killing process with pid 60647 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:44.125 11:21:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60647' 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60647 00:10:44.125 11:21:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60647 00:10:46.660 00:10:46.660 real 0m14.151s 00:10:46.660 user 0m14.382s 00:10:46.660 sys 0m1.682s 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:46.660 ************************************ 00:10:46.660 END TEST locking_app_on_unlocked_coremask 00:10:46.660 ************************************ 00:10:46.660 11:21:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:46.660 11:21:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:46.660 11:21:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.660 11:21:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:46.660 ************************************ 00:10:46.660 START TEST locking_app_on_locked_coremask 00:10:46.660 ************************************ 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60818 00:10:46.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60818 /var/tmp/spdk.sock 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60818 ']' 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:46.660 11:21:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:46.919 [2024-10-07 11:21:28.385793] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:46.919 [2024-10-07 11:21:28.385929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60818 ] 00:10:46.919 [2024-10-07 11:21:28.558334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.180 [2024-10-07 11:21:28.842678] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60845 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60845 /var/tmp/spdk2.sock 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60845 /var/tmp/spdk2.sock 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:48.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60845 /var/tmp/spdk2.sock 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60845 ']' 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.579 11:21:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.579 [2024-10-07 11:21:30.033483] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:48.579 [2024-10-07 11:21:30.033619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60845 ] 00:10:48.579 [2024-10-07 11:21:30.207725] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60818 has claimed it. 00:10:48.579 [2024-10-07 11:21:30.211846] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:49.149 ERROR: process (pid: 60845) is no longer running 00:10:49.149 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60845) - No such process 00:10:49.149 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.149 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:49.149 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:49.150 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:49.150 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:49.150 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:49.150 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60818 00:10:49.150 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60818 00:10:49.150 11:21:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:49.718 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60818 00:10:49.718 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60818 ']' 00:10:49.718 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60818 00:10:49.719 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:49.719 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.719 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60818 00:10:49.719 killing process with pid 60818 00:10:49.719 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.719 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.719 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60818' 00:10:49.719 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60818 00:10:49.719 11:21:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60818 00:10:53.008 ************************************ 00:10:53.008 END TEST locking_app_on_locked_coremask 00:10:53.008 00:10:53.008 real 0m5.887s 00:10:53.008 user 0m5.937s 00:10:53.008 sys 0m1.210s 00:10:53.008 11:21:34 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.008 11:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:53.008 ************************************ 00:10:53.008 11:21:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:53.008 11:21:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.008 11:21:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.008 11:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:53.008 ************************************ 00:10:53.008 START TEST locking_overlapped_coremask 00:10:53.008 ************************************ 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60920 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60920 /var/tmp/spdk.sock 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60920 ']' 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.008 11:21:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:53.008 [2024-10-07 11:21:34.356985] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:53.008 [2024-10-07 11:21:34.357183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60920 ] 00:10:53.008 [2024-10-07 11:21:34.542513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.265 [2024-10-07 11:21:34.865371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.265 [2024-10-07 11:21:34.865472] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.265 [2024-10-07 11:21:34.865498] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60938 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60938 /var/tmp/spdk2.sock 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60938 /var/tmp/spdk2.sock 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:54.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60938 /var/tmp/spdk2.sock 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60938 ']' 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.202 11:21:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.461 [2024-10-07 11:21:35.935797] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:54.461 [2024-10-07 11:21:35.937152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60938 ] 00:10:54.461 [2024-10-07 11:21:36.138079] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60920 has claimed it. 00:10:54.461 [2024-10-07 11:21:36.138165] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:55.027 ERROR: process (pid: 60938) is no longer running 00:10:55.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60938) - No such process 00:10:55.027 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.027 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:55.027 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:55.027 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:55.027 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:55.027 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60920 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60920 ']' 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60920 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60920 00:10:55.028 killing process with pid 60920 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60920' 00:10:55.028 11:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60920 00:10:55.028 11:21:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60920 00:10:58.311 ************************************ 00:10:58.311 END TEST locking_overlapped_coremask 00:10:58.311 ************************************ 00:10:58.311 00:10:58.311 real 0m5.040s 00:10:58.311 user 0m13.083s 00:10:58.311 sys 0m0.777s 00:10:58.311 11:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.311 11:21:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:58.311 11:21:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:58.311 11:21:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:58.311 11:21:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.311 11:21:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.311 ************************************ 00:10:58.311 START TEST locking_overlapped_coremask_via_rpc 00:10:58.311 ************************************ 00:10:58.311 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:10:58.311 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61008 00:10:58.311 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61008 /var/tmp/spdk.sock 00:10:58.312 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:58.312 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61008 ']' 00:10:58.312 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.312 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.312 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.312 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.312 11:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.312 [2024-10-07 11:21:39.462663] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:10:58.312 [2024-10-07 11:21:39.462810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:10:58.312 [2024-10-07 11:21:39.626328] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
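Note: the locking_overlapped_coremask test that just ended exercises SPDK's per-core lock files: a target started without --disable-cpumask-locks claims /var/tmp/spdk_cpu_lock_NNN for each core in its mask at startup, so a second target whose mask overlaps aborts. A minimal replay of that flow, assuming a local SPDK build tree with the binary and socket paths shown in the log:

  build/bin/spdk_tgt -m 0x7 &                  # claims cores 0-2 -> lock files _000.._002
  ls /var/tmp/spdk_cpu_lock_*                  # spdk_cpu_lock_000 _001 _002
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
  # exits: "Cannot create lock on core 2, probably process <pid> has claimed it."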
00:10:58.312 [2024-10-07 11:21:39.626387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:58.312 [2024-10-07 11:21:39.871017] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.312 [2024-10-07 11:21:39.871119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.312 [2024-10-07 11:21:39.871140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61031 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61031 /var/tmp/spdk2.sock 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61031 ']' 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:59.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.246 11:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.505 [2024-10-07 11:21:41.001522] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:10:59.505 [2024-10-07 11:21:41.001890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61031 ] 00:10:59.505 [2024-10-07 11:21:41.175020] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:59.505 [2024-10-07 11:21:41.175087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:00.072 [2024-10-07 11:21:41.734336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.072 [2024-10-07 11:21:41.739798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.072 [2024-10-07 11:21:41.739799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.603 [2024-10-07 11:21:43.861065] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61008 has claimed it. 
00:11:02.603 request: 00:11:02.603 { 00:11:02.603 "method": "framework_enable_cpumask_locks", 00:11:02.603 "req_id": 1 00:11:02.603 } 00:11:02.603 Got JSON-RPC error response 00:11:02.603 response: 00:11:02.603 { 00:11:02.603 "code": -32603, 00:11:02.603 "message": "Failed to claim CPU core: 2" 00:11:02.603 } 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61008 /var/tmp/spdk.sock 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61008 ']' 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.603 11:21:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61031 /var/tmp/spdk2.sock 00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61031 ']' 00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
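Note: with --disable-cpumask-locks neither target claims lock files at startup; the framework_enable_cpumask_locks RPC claims them on demand, so the second call collides on core 2 and returns the -32603 error shown above. The same exchange driven by hand with scripts/rpc.py (paths and sockets as in the log):

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  scripts/rpc.py framework_enable_cpumask_locks                        # claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"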
00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.603 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.863 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.863 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:02.863 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:02.863 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:02.863 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:02.863 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:02.863 00:11:02.863 real 0m5.131s 00:11:02.863 user 0m1.609s 00:11:02.863 sys 0m0.250s 00:11:02.863 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.863 11:21:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.863 ************************************ 00:11:02.863 END TEST locking_overlapped_coremask_via_rpc 00:11:02.863 ************************************ 00:11:02.863 11:21:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:02.863 11:21:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61008 ]] 00:11:02.863 11:21:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61008 00:11:02.863 11:21:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61008 ']' 00:11:02.863 11:21:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61008 00:11:02.863 11:21:44 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:11:02.863 11:21:44 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.863 11:21:44 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61008 00:11:03.122 11:21:44 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.122 11:21:44 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.122 11:21:44 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61008' 00:11:03.122 killing process with pid 61008 00:11:03.122 11:21:44 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61008 00:11:03.122 11:21:44 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61008 00:11:05.653 11:21:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61031 ]] 00:11:05.653 11:21:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61031 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61031 ']' 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61031 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:05.653 
11:21:47 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61031 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:11:05.653 killing process with pid 61031 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61031' 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61031 00:11:05.653 11:21:47 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61031 00:11:08.932 11:21:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:08.932 11:21:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:08.932 Process with pid 61008 is not found 00:11:08.932 11:21:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61008 ]] 00:11:08.932 11:21:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61008 00:11:08.932 11:21:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61008 ']' 00:11:08.932 11:21:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61008 00:11:08.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61008) - No such process 00:11:08.932 11:21:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61008 is not found' 00:11:08.932 11:21:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61031 ]] 00:11:08.932 11:21:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61031 00:11:08.932 11:21:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61031 ']' 00:11:08.932 11:21:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61031 00:11:08.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61031) - No such process 00:11:08.932 11:21:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61031 is not found' 00:11:08.932 Process with pid 61031 is not found 00:11:08.932 11:21:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:08.932 ************************************ 00:11:08.932 END TEST cpu_locks 00:11:08.932 ************************************ 00:11:08.932 00:11:08.932 real 1m0.225s 00:11:08.932 user 1m39.642s 00:11:08.932 sys 0m8.883s 00:11:08.932 11:21:50 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.932 11:21:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:08.932 ************************************ 00:11:08.932 END TEST event 00:11:08.932 ************************************ 00:11:08.932 00:11:08.932 real 1m34.481s 00:11:08.932 user 2m44.823s 00:11:08.932 sys 0m13.991s 00:11:08.932 11:21:50 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.932 11:21:50 event -- common/autotest_common.sh@10 -- # set +x 00:11:08.932 11:21:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:08.932 11:21:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:08.933 11:21:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.933 11:21:50 -- common/autotest_common.sh@10 -- # set +x 00:11:08.933 ************************************ 00:11:08.933 START TEST thread 00:11:08.933 ************************************ 00:11:08.933 11:21:50 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:09.190 * Looking for test storage... 
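Note: check_remaining_locks, traced twice in this group, simply compares the lock files actually on disk (a glob) against the set the test expects (a brace expansion); condensed from event/cpu_locks.sh as it appears in the trace:

  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
  }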
00:11:09.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:09.190 11:21:50 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:09.190 11:21:50 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:11:09.190 11:21:50 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:09.190 11:21:50 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:09.190 11:21:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.190 11:21:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.190 11:21:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.190 11:21:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.190 11:21:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.190 11:21:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.190 11:21:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.190 11:21:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.190 11:21:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.190 11:21:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.190 11:21:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.190 11:21:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:09.190 11:21:50 thread -- scripts/common.sh@345 -- # : 1 00:11:09.190 11:21:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.190 11:21:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.190 11:21:50 thread -- scripts/common.sh@365 -- # decimal 1 00:11:09.190 11:21:50 thread -- scripts/common.sh@353 -- # local d=1 00:11:09.190 11:21:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.190 11:21:50 thread -- scripts/common.sh@355 -- # echo 1 00:11:09.190 11:21:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.190 11:21:50 thread -- scripts/common.sh@366 -- # decimal 2 00:11:09.190 11:21:50 thread -- scripts/common.sh@353 -- # local d=2 00:11:09.190 11:21:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.190 11:21:50 thread -- scripts/common.sh@355 -- # echo 2 00:11:09.190 11:21:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.190 11:21:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.190 11:21:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.190 11:21:50 thread -- scripts/common.sh@368 -- # return 0 00:11:09.190 11:21:50 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.190 11:21:50 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:09.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.191 --rc genhtml_branch_coverage=1 00:11:09.191 --rc genhtml_function_coverage=1 00:11:09.191 --rc genhtml_legend=1 00:11:09.191 --rc geninfo_all_blocks=1 00:11:09.191 --rc geninfo_unexecuted_blocks=1 00:11:09.191 00:11:09.191 ' 00:11:09.191 11:21:50 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:09.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.191 --rc genhtml_branch_coverage=1 00:11:09.191 --rc genhtml_function_coverage=1 00:11:09.191 --rc genhtml_legend=1 00:11:09.191 --rc geninfo_all_blocks=1 00:11:09.191 --rc geninfo_unexecuted_blocks=1 00:11:09.191 00:11:09.191 ' 00:11:09.191 11:21:50 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:09.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:09.191 --rc genhtml_branch_coverage=1 00:11:09.191 --rc genhtml_function_coverage=1 00:11:09.191 --rc genhtml_legend=1 00:11:09.191 --rc geninfo_all_blocks=1 00:11:09.191 --rc geninfo_unexecuted_blocks=1 00:11:09.191 00:11:09.191 ' 00:11:09.191 11:21:50 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:09.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.191 --rc genhtml_branch_coverage=1 00:11:09.191 --rc genhtml_function_coverage=1 00:11:09.191 --rc genhtml_legend=1 00:11:09.191 --rc geninfo_all_blocks=1 00:11:09.191 --rc geninfo_unexecuted_blocks=1 00:11:09.191 00:11:09.191 ' 00:11:09.191 11:21:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:09.191 11:21:50 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:11:09.191 11:21:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.191 11:21:50 thread -- common/autotest_common.sh@10 -- # set +x 00:11:09.191 ************************************ 00:11:09.191 START TEST thread_poller_perf 00:11:09.191 ************************************ 00:11:09.191 11:21:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:09.191 [2024-10-07 11:21:50.811940] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:09.191 [2024-10-07 11:21:50.812290] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61243 ] 00:11:09.449 [2024-10-07 11:21:51.000131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.708 [2024-10-07 11:21:51.256059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.708 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:11:11.082 [2024-10-07T11:21:52.793Z] ====================================== 00:11:11.082 [2024-10-07T11:21:52.793Z] busy:2498432082 (cyc) 00:11:11.082 [2024-10-07T11:21:52.793Z] total_run_count: 387000 00:11:11.082 [2024-10-07T11:21:52.793Z] tsc_hz: 2490000000 (cyc) 00:11:11.082 [2024-10-07T11:21:52.793Z] ====================================== 00:11:11.082 [2024-10-07T11:21:52.793Z] poller_cost: 6455 (cyc), 2592 (nsec) 00:11:11.082 00:11:11.082 real 0m1.916s 00:11:11.082 user 0m1.684s 00:11:11.082 sys 0m0.120s 00:11:11.082 11:21:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.082 11:21:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:11.082 ************************************ 00:11:11.082 END TEST thread_poller_perf 00:11:11.082 ************************************ 00:11:11.082 11:21:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:11.082 11:21:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:11:11.082 11:21:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.082 11:21:52 thread -- common/autotest_common.sh@10 -- # set +x 00:11:11.082 ************************************ 00:11:11.082 START TEST thread_poller_perf 00:11:11.082 ************************************ 00:11:11.082 11:21:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:11.361 [2024-10-07 11:21:52.809127] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:11.361 [2024-10-07 11:21:52.809451] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61285 ] 00:11:11.361 [2024-10-07 11:21:52.982619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.620 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:11:11.620 [2024-10-07 11:21:53.207517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.997 [2024-10-07T11:21:54.708Z] ====================================== 00:11:12.997 [2024-10-07T11:21:54.708Z] busy:2494409086 (cyc) 00:11:12.997 [2024-10-07T11:21:54.708Z] total_run_count: 4941000 00:11:12.997 [2024-10-07T11:21:54.708Z] tsc_hz: 2490000000 (cyc) 00:11:12.997 [2024-10-07T11:21:54.708Z] ====================================== 00:11:12.997 [2024-10-07T11:21:54.708Z] poller_cost: 504 (cyc), 202 (nsec) 00:11:12.997 00:11:12.997 real 0m1.892s 00:11:12.997 user 0m1.648s 00:11:12.997 sys 0m0.134s 00:11:12.997 11:21:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.997 11:21:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:12.997 ************************************ 00:11:12.997 END TEST thread_poller_perf 00:11:12.997 ************************************ 00:11:13.276 11:21:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:13.276 00:11:13.276 real 0m4.164s 00:11:13.276 user 0m3.501s 00:11:13.276 sys 0m0.439s 00:11:13.276 ************************************ 00:11:13.276 END TEST thread 00:11:13.276 ************************************ 00:11:13.276 11:21:54 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.276 11:21:54 thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.276 11:21:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:13.276 11:21:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:13.276 11:21:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:13.276 11:21:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.276 11:21:54 -- common/autotest_common.sh@10 -- # set +x 00:11:13.276 ************************************ 00:11:13.276 START TEST app_cmdline 00:11:13.276 ************************************ 00:11:13.276 11:21:54 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:13.276 * Looking for test storage... 
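Note: poller_cost in the two summaries above is just busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (2.49 GHz here); the reported figures reproduce with integer shell arithmetic:

  tsc_hz=2490000000
  busy=2498432082 runs=387000                     # 1 us period run
  echo $(( busy / runs ))                         # 6455 (cyc)
  echo $(( busy / runs * 1000000000 / tsc_hz ))   # 2592 (nsec)
  busy=2494409086 runs=4941000                    # 0 us period run
  echo $(( busy / runs ))                         # 504 (cyc)
  echo $(( busy / runs * 1000000000 / tsc_hz ))   # 202 (nsec)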
00:11:13.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:13.276 11:21:54 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:13.276 11:21:54 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:11:13.276 11:21:54 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:13.586 11:21:54 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:13.586 11:21:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.586 11:21:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:13.586 11:21:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:13.586 11:21:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.586 11:21:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:13.586 11:21:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.586 11:21:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.586 11:21:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.586 11:21:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.586 --rc genhtml_branch_coverage=1 00:11:13.586 --rc genhtml_function_coverage=1 00:11:13.586 --rc genhtml_legend=1 00:11:13.586 --rc geninfo_all_blocks=1 00:11:13.586 --rc geninfo_unexecuted_blocks=1 00:11:13.586 00:11:13.586 ' 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.586 --rc genhtml_branch_coverage=1 00:11:13.586 --rc genhtml_function_coverage=1 00:11:13.586 --rc genhtml_legend=1 00:11:13.586 --rc geninfo_all_blocks=1 00:11:13.586 --rc geninfo_unexecuted_blocks=1 00:11:13.586 
00:11:13.586 ' 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.586 --rc genhtml_branch_coverage=1 00:11:13.586 --rc genhtml_function_coverage=1 00:11:13.586 --rc genhtml_legend=1 00:11:13.586 --rc geninfo_all_blocks=1 00:11:13.586 --rc geninfo_unexecuted_blocks=1 00:11:13.586 00:11:13.586 ' 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:13.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.586 --rc genhtml_branch_coverage=1 00:11:13.586 --rc genhtml_function_coverage=1 00:11:13.586 --rc genhtml_legend=1 00:11:13.586 --rc geninfo_all_blocks=1 00:11:13.586 --rc geninfo_unexecuted_blocks=1 00:11:13.586 00:11:13.586 ' 00:11:13.586 11:21:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:13.586 11:21:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61374 00:11:13.586 11:21:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:13.586 11:21:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61374 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61374 ']' 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.586 11:21:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:13.586 [2024-10-07 11:21:55.125718] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
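Note: the lcov gate traced before each test group ("lt 1.15 2" through cmp_versions in scripts/common.sh) compares dot/dash-separated numeric fields left to right, treating missing fields as zero. A condensed, behavior-equivalent sketch of just the "<" case (the traced helper also handles ">" and "=" operators and validates each field is numeric):

  lt() {  # succeed when version $1 sorts strictly before $2
    local IFS=.-: v ver1 ver2
    read -ra ver1 <<< "$1"; read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov older than 2"  # true here, so the fallback LCOV_OPTS get exported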
00:11:13.586 [2024-10-07 11:21:55.126096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61374 ] 00:11:13.844 [2024-10-07 11:21:55.304209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.844 [2024-10-07 11:21:55.521074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.779 11:21:56 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.779 11:21:56 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:11:14.779 11:21:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:15.037 { 00:11:15.037 "version": "SPDK v25.01-pre git sha1 d16db39ee", 00:11:15.038 "fields": { 00:11:15.038 "major": 25, 00:11:15.038 "minor": 1, 00:11:15.038 "patch": 0, 00:11:15.038 "suffix": "-pre", 00:11:15.038 "commit": "d16db39ee" 00:11:15.038 } 00:11:15.038 } 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:15.038 11:21:56 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:15.038 11:21:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:15.038 11:21:56 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:15.038 11:21:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:15.038 11:21:56 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:11:15.038 11:21:56 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:15.038 11:21:56 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.038 11:21:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.038 11:21:56 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.296 11:21:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.296 11:21:56 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.296 11:21:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.296 11:21:56 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.296 11:21:56 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:15.297 11:21:56 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:15.297 request: 00:11:15.297 { 00:11:15.297 "method": "env_dpdk_get_mem_stats", 00:11:15.297 "req_id": 1 00:11:15.297 } 00:11:15.297 Got JSON-RPC error response 00:11:15.297 response: 00:11:15.297 { 00:11:15.297 "code": -32601, 00:11:15.297 "message": "Method not found" 00:11:15.297 } 00:11:15.297 11:21:56 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:11:15.297 11:21:56 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:15.297 11:21:56 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:15.297 11:21:56 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:15.297 11:21:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61374 00:11:15.297 11:21:56 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61374 ']' 00:11:15.297 11:21:56 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61374 00:11:15.297 11:21:57 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:11:15.297 11:21:57 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.555 11:21:57 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61374 00:11:15.555 killing process with pid 61374 00:11:15.555 11:21:57 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.555 11:21:57 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.555 11:21:57 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61374' 00:11:15.555 11:21:57 app_cmdline -- common/autotest_common.sh@969 -- # kill 61374 00:11:15.555 11:21:57 app_cmdline -- common/autotest_common.sh@974 -- # wait 61374 00:11:18.145 00:11:18.145 real 0m5.006s 00:11:18.145 user 0m5.232s 00:11:18.145 sys 0m0.683s 00:11:18.145 11:21:59 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.145 ************************************ 00:11:18.145 END TEST app_cmdline 00:11:18.145 ************************************ 00:11:18.145 11:21:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:18.145 11:21:59 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:18.145 11:21:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:18.145 11:21:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.145 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:11:18.145 ************************************ 00:11:18.145 START TEST version 00:11:18.145 ************************************ 00:11:18.145 11:21:59 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:18.404 * Looking for test storage... 
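Note: the app_cmdline group above verifies the --rpcs-allowed allowlist: only the two listed methods may be invoked over the socket, and anything else (env_dpdk_get_mem_stats here) is rejected with JSON-RPC -32601 "Method not found". Replaying it by hand with the paths from the log:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version        # allowed -> the version object printed above
  scripts/rpc.py rpc_get_methods         # allowed -> exactly those two method names
  scripts/rpc.py env_dpdk_get_mem_stats  # rejected -> -32601 "Method not found"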
00:11:18.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:18.404 11:21:59 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:18.404 11:21:59 version -- common/autotest_common.sh@1681 -- # lcov --version 00:11:18.404 11:21:59 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:18.404 11:22:00 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:18.404 11:22:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.404 11:22:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.404 11:22:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.404 11:22:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.404 11:22:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.404 11:22:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.404 11:22:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.404 11:22:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.404 11:22:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.404 11:22:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.404 11:22:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.404 11:22:00 version -- scripts/common.sh@344 -- # case "$op" in 00:11:18.404 11:22:00 version -- scripts/common.sh@345 -- # : 1 00:11:18.404 11:22:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.404 11:22:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.404 11:22:00 version -- scripts/common.sh@365 -- # decimal 1 00:11:18.404 11:22:00 version -- scripts/common.sh@353 -- # local d=1 00:11:18.404 11:22:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.404 11:22:00 version -- scripts/common.sh@355 -- # echo 1 00:11:18.404 11:22:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.404 11:22:00 version -- scripts/common.sh@366 -- # decimal 2 00:11:18.404 11:22:00 version -- scripts/common.sh@353 -- # local d=2 00:11:18.404 11:22:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.404 11:22:00 version -- scripts/common.sh@355 -- # echo 2 00:11:18.404 11:22:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.404 11:22:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.404 11:22:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.404 11:22:00 version -- scripts/common.sh@368 -- # return 0 00:11:18.404 11:22:00 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.404 11:22:00 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:18.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.404 --rc genhtml_branch_coverage=1 00:11:18.404 --rc genhtml_function_coverage=1 00:11:18.404 --rc genhtml_legend=1 00:11:18.404 --rc geninfo_all_blocks=1 00:11:18.405 --rc geninfo_unexecuted_blocks=1 00:11:18.405 00:11:18.405 ' 00:11:18.405 11:22:00 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:18.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.405 --rc genhtml_branch_coverage=1 00:11:18.405 --rc genhtml_function_coverage=1 00:11:18.405 --rc genhtml_legend=1 00:11:18.405 --rc geninfo_all_blocks=1 00:11:18.405 --rc geninfo_unexecuted_blocks=1 00:11:18.405 00:11:18.405 ' 00:11:18.405 11:22:00 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:18.405 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:18.405 --rc genhtml_branch_coverage=1 00:11:18.405 --rc genhtml_function_coverage=1 00:11:18.405 --rc genhtml_legend=1 00:11:18.405 --rc geninfo_all_blocks=1 00:11:18.405 --rc geninfo_unexecuted_blocks=1 00:11:18.405 00:11:18.405 ' 00:11:18.405 11:22:00 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:18.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.405 --rc genhtml_branch_coverage=1 00:11:18.405 --rc genhtml_function_coverage=1 00:11:18.405 --rc genhtml_legend=1 00:11:18.405 --rc geninfo_all_blocks=1 00:11:18.405 --rc geninfo_unexecuted_blocks=1 00:11:18.405 00:11:18.405 ' 00:11:18.405 11:22:00 version -- app/version.sh@17 -- # get_header_version major 00:11:18.405 11:22:00 version -- app/version.sh@14 -- # cut -f2 00:11:18.405 11:22:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:18.405 11:22:00 version -- app/version.sh@14 -- # tr -d '"' 00:11:18.405 11:22:00 version -- app/version.sh@17 -- # major=25 00:11:18.405 11:22:00 version -- app/version.sh@18 -- # get_header_version minor 00:11:18.405 11:22:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:18.405 11:22:00 version -- app/version.sh@14 -- # cut -f2 00:11:18.405 11:22:00 version -- app/version.sh@14 -- # tr -d '"' 00:11:18.405 11:22:00 version -- app/version.sh@18 -- # minor=1 00:11:18.405 11:22:00 version -- app/version.sh@19 -- # get_header_version patch 00:11:18.405 11:22:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:18.405 11:22:00 version -- app/version.sh@14 -- # cut -f2 00:11:18.405 11:22:00 version -- app/version.sh@14 -- # tr -d '"' 00:11:18.664 11:22:00 version -- app/version.sh@19 -- # patch=0 00:11:18.664 11:22:00 version -- app/version.sh@20 -- # get_header_version suffix 00:11:18.664 11:22:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:18.664 11:22:00 version -- app/version.sh@14 -- # tr -d '"' 00:11:18.664 11:22:00 version -- app/version.sh@14 -- # cut -f2 00:11:18.664 11:22:00 version -- app/version.sh@20 -- # suffix=-pre 00:11:18.664 11:22:00 version -- app/version.sh@22 -- # version=25.1 00:11:18.664 11:22:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:18.664 11:22:00 version -- app/version.sh@28 -- # version=25.1rc0 00:11:18.664 11:22:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:18.664 11:22:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:18.664 11:22:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:18.664 11:22:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:18.664 ************************************ 00:11:18.664 END TEST version 00:11:18.664 ************************************ 00:11:18.664 00:11:18.664 real 0m0.330s 00:11:18.664 user 0m0.183s 00:11:18.664 sys 0m0.189s 00:11:18.664 11:22:00 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.664 11:22:00 version -- common/autotest_common.sh@10 -- # set +x 00:11:18.664 11:22:00 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:18.664 11:22:00 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:18.664 11:22:00 -- spdk/autotest.sh@194 -- # uname -s 00:11:18.664 11:22:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:18.664 11:22:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:18.664 11:22:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:18.664 11:22:00 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:11:18.664 11:22:00 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:18.664 11:22:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.664 11:22:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.664 11:22:00 -- common/autotest_common.sh@10 -- # set +x 00:11:18.664 ************************************ 00:11:18.664 START TEST blockdev_nvme 00:11:18.664 ************************************ 00:11:18.664 11:22:00 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:18.923 * Looking for test storage... 00:11:18.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:18.923 11:22:00 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:18.923 11:22:00 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:11:18.923 11:22:00 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:18.923 11:22:00 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.923 11:22:00 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:11:18.923 11:22:00 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.923 11:22:00 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:18.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.923 --rc genhtml_branch_coverage=1 00:11:18.923 --rc genhtml_function_coverage=1 00:11:18.923 --rc genhtml_legend=1 00:11:18.923 --rc geninfo_all_blocks=1 00:11:18.924 --rc geninfo_unexecuted_blocks=1 00:11:18.924 00:11:18.924 ' 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:18.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.924 --rc genhtml_branch_coverage=1 00:11:18.924 --rc genhtml_function_coverage=1 00:11:18.924 --rc genhtml_legend=1 00:11:18.924 --rc geninfo_all_blocks=1 00:11:18.924 --rc geninfo_unexecuted_blocks=1 00:11:18.924 00:11:18.924 ' 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:18.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.924 --rc genhtml_branch_coverage=1 00:11:18.924 --rc genhtml_function_coverage=1 00:11:18.924 --rc genhtml_legend=1 00:11:18.924 --rc geninfo_all_blocks=1 00:11:18.924 --rc geninfo_unexecuted_blocks=1 00:11:18.924 00:11:18.924 ' 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:18.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.924 --rc genhtml_branch_coverage=1 00:11:18.924 --rc genhtml_function_coverage=1 00:11:18.924 --rc genhtml_legend=1 00:11:18.924 --rc geninfo_all_blocks=1 00:11:18.924 --rc geninfo_unexecuted_blocks=1 00:11:18.924 00:11:18.924 ' 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:18.924 11:22:00 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61568 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:18.924 11:22:00 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61568 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 61568 ']' 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.924 11:22:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:19.183 [2024-10-07 11:22:00.633386] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
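Note: setup_nvme_conf, traced next, pipes gen_nvme.sh output into load_subsystem_config; each entry is one bdev_nvme_attach_controller call against an emulated PCIe controller. A per-controller equivalent via scripts/rpc.py (the -b/-t/-a short flags are assumed from standard rpc.py usage; this log only shows the JSON param names trtype/name/traddr):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0  # flags assumed
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
  scripts/rpc.py bdev_wait_for_examine   # as traced: block until the bdevs are registered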
00:11:19.183 [2024-10-07 11:22:00.633524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61568 ] 00:11:19.183 [2024-10-07 11:22:00.805531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.442 [2024-10-07 11:22:01.022914] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.375 11:22:01 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.375 11:22:01 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:11:20.375 11:22:01 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:20.375 11:22:01 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:11:20.375 11:22:01 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:11:20.375 11:22:01 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:20.375 11:22:01 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:20.375 11:22:02 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:20.375 11:22:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.375 11:22:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.634 11:22:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.634 11:22:02 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:20.634 11:22:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.634 11:22:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.907 11:22:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.908 11:22:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:11:20.908 11:22:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.908 11:22:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.908 11:22:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.908 11:22:02 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:20.908 11:22:02 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:20.908 11:22:02 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.908 11:22:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.908 11:22:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:20.908 11:22:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:20.909 11:22:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9cf2700c-59a2-47b1-8bc0-1acdfeaf62cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9cf2700c-59a2-47b1-8bc0-1acdfeaf62cf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "43ef2ffd-0b04-4f86-90c1-b625af83b973"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "43ef2ffd-0b04-4f86-90c1-b625af83b973",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c1b78ba3-c048-4845-b7e9-5224fa060ff9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1b78ba3-c048-4845-b7e9-5224fa060ff9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0caa4774-cab9-491c-bc46-580f8acf566f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0caa4774-cab9-491c-bc46-580f8acf566f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e032cdf1-66af-4300-8067-cef4ba88b11f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "e032cdf1-66af-4300-8067-cef4ba88b11f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a9f3c074-36f7-497d-8f17-25cffc9141e9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a9f3c074-36f7-497d-8f17-25cffc9141e9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:20.909 11:22:02 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:20.909 11:22:02 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:11:20.909 11:22:02 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:20.909 11:22:02 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61568 00:11:20.909 11:22:02 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 61568 ']' 00:11:20.909 11:22:02 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 61568 00:11:20.909 11:22:02 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:11:20.909 11:22:02 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.909 11:22:02 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61568 00:11:21.182 11:22:02 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.182 11:22:02 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.182 11:22:02 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61568' 00:11:21.182 killing process with pid 61568 00:11:21.182 11:22:02 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 61568 00:11:21.182 11:22:02 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 61568 00:11:23.721 11:22:05 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:23.721 11:22:05 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:23.721 11:22:05 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:23.721 11:22:05 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.721 11:22:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:23.721 ************************************ 00:11:23.721 START TEST bdev_hello_world 00:11:23.721 ************************************ 00:11:23.721 11:22:05 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:23.980 [2024-10-07 11:22:05.505962] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:23.980 [2024-10-07 11:22:05.506122] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61674 ] 00:11:23.980 [2024-10-07 11:22:05.682528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.239 [2024-10-07 11:22:05.923329] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.251 [2024-10-07 11:22:06.615967] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:25.251 [2024-10-07 11:22:06.616025] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:25.251 [2024-10-07 11:22:06.616050] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:25.251 [2024-10-07 11:22:06.619331] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:25.251 [2024-10-07 11:22:06.619830] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:25.251 [2024-10-07 11:22:06.619861] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:25.251 [2024-10-07 11:22:06.620082] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
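Before the hello-world run above, setup_nvme_conf handed all four PCIe controllers to the target in a single load_subsystem_config call and then filtered bdev_get_bdevs with jq. The same wiring done call by call, as a sketch (controller names and PCI addresses taken from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach each PCIe controller as an NVMe bdev controller.
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  $rpc bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
  $rpc bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
  $rpc bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0
  # List only unclaimed bdevs by name, as blockdev.sh@747-748 does above.
  $rpc bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'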
00:11:25.251 00:11:25.251 [2024-10-07 11:22:06.620106] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:26.629 00:11:26.629 real 0m2.567s 00:11:26.629 user 0m2.173s 00:11:26.629 sys 0m0.284s 00:11:26.629 11:22:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.629 11:22:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:26.629 ************************************ 00:11:26.629 END TEST bdev_hello_world 00:11:26.629 ************************************ 00:11:26.630 11:22:08 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:26.630 11:22:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.630 11:22:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.630 11:22:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:26.630 ************************************ 00:11:26.630 START TEST bdev_bounds 00:11:26.630 ************************************ 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61721 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61721' 00:11:26.630 Process bdevio pid: 61721 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61721 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61721 ']' 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.630 11:22:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:26.630 [2024-10-07 11:22:08.143272] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
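The START TEST / END TEST banners and the real/user/sys triple printed here come from the run_test wrapper, which is essentially a named, timed invocation. A simplified sketch (the real helper also toggles xtrace and records per-test timing state):

  run_test_sketch() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # emits the real/user/sys lines seen in the log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }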
00:11:26.630 [2024-10-07 11:22:08.143418] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61721 ] 00:11:26.630 [2024-10-07 11:22:08.311292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:26.887 [2024-10-07 11:22:08.573368] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.887 [2024-10-07 11:22:08.573459] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.887 [2024-10-07 11:22:08.573485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.820 11:22:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.820 11:22:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:11:27.820 11:22:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:27.820 I/O targets: 00:11:27.820 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:27.820 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:27.820 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:27.820 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:27.821 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:27.821 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:27.821 00:11:27.821 00:11:27.821 CUnit - A unit testing framework for C - Version 2.1-3 00:11:27.821 http://cunit.sourceforge.net/ 00:11:27.821 00:11:27.821 00:11:27.821 Suite: bdevio tests on: Nvme3n1 00:11:27.821 Test: blockdev write read block ...passed 00:11:27.821 Test: blockdev write zeroes read block ...passed 00:11:27.821 Test: blockdev write zeroes read no split ...passed 00:11:27.821 Test: blockdev write zeroes read split ...passed 00:11:28.079 Test: blockdev write zeroes read split partial ...passed 00:11:28.079 Test: blockdev reset ...[2024-10-07 11:22:09.575754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:11:28.079 [2024-10-07 11:22:09.580780] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
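Each suite's 'blockdev reset' drives the controller through a disconnect/reconnect cycle, which is what the nvme_ctrlr.c and bdev_nvme.c notices record. The same reset can be requested by hand over RPC; a sketch against a running target, using the controller name attached earlier in this run:

  # Ask the nvme bdev module to reset the controller behind 0000:00:13.0.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3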
00:11:28.079 passed 00:11:28.079 Test: blockdev write read 8 blocks ...passed 00:11:28.079 Test: blockdev write read size > 128k ...passed 00:11:28.079 Test: blockdev write read invalid size ...passed 00:11:28.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.079 Test: blockdev write read max offset ...passed 00:11:28.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.079 Test: blockdev writev readv 8 blocks ...passed 00:11:28.079 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.079 Test: blockdev writev readv block ...passed 00:11:28.079 Test: blockdev writev readv size > 128k ...passed 00:11:28.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.079 Test: blockdev comparev and writev ...[2024-10-07 11:22:09.593894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0e0a000 len:0x1000 00:11:28.079 [2024-10-07 11:22:09.593987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:28.079 passed 00:11:28.079 Test: blockdev nvme passthru rw ...passed 00:11:28.079 Test: blockdev nvme passthru vendor specific ...[2024-10-07 11:22:09.595128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:28.079 [2024-10-07 11:22:09.595170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:28.079 passed 00:11:28.079 Test: blockdev nvme admin passthru ...passed 00:11:28.079 Test: blockdev copy ...passed 00:11:28.079 Suite: bdevio tests on: Nvme2n3 00:11:28.079 Test: blockdev write read block ...passed 00:11:28.079 Test: blockdev write zeroes read block ...passed 00:11:28.079 Test: blockdev write zeroes read no split ...passed 00:11:28.079 Test: blockdev write zeroes read split ...passed 00:11:28.079 Test: blockdev write zeroes read split partial ...passed 00:11:28.079 Test: blockdev reset ...[2024-10-07 11:22:09.685198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:28.079 [2024-10-07 11:22:09.690681] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
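The COMPARE FAILURE (02/85) notices inside 'blockdev comparev and writev' appear to be deliberate: the suites pass despite them, consistent with the test provoking a miscompare before issuing the matching compare-and-write. Decoding the completion fields (comment-only sketch, per the NVMe base specification):

  # qid:1 cid:190  -> I/O queue pair 1, command identifier 190
  # (02/85)        -> Status Code Type 0x2 (Media and Data Integrity Errors),
  #                   Status Code 0x85 (Compare Failure)
  # dnr:1          -> Do Not Retry: resubmitting the same compare would fail again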
00:11:28.079 passed 00:11:28.079 Test: blockdev write read 8 blocks ...passed 00:11:28.079 Test: blockdev write read size > 128k ...passed 00:11:28.079 Test: blockdev write read invalid size ...passed 00:11:28.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.079 Test: blockdev write read max offset ...passed 00:11:28.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.079 Test: blockdev writev readv 8 blocks ...passed 00:11:28.079 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.079 Test: blockdev writev readv block ...passed 00:11:28.079 Test: blockdev writev readv size > 128k ...passed 00:11:28.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.079 Test: blockdev comparev and writev ...[2024-10-07 11:22:09.699173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x294e04000 len:0x1000 00:11:28.079 [2024-10-07 11:22:09.699260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:28.079 passed 00:11:28.079 Test: blockdev nvme passthru rw ...passed 00:11:28.079 Test: blockdev nvme passthru vendor specific ...passed 00:11:28.079 Test: blockdev nvme admin passthru ...[2024-10-07 11:22:09.700043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:28.079 [2024-10-07 11:22:09.700080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:28.079 passed 00:11:28.079 Test: blockdev copy ...passed 00:11:28.079 Suite: bdevio tests on: Nvme2n2 00:11:28.079 Test: blockdev write read block ...passed 00:11:28.079 Test: blockdev write zeroes read block ...passed 00:11:28.079 Test: blockdev write zeroes read no split ...passed 00:11:28.079 Test: blockdev write zeroes read split ...passed 00:11:28.079 Test: blockdev write zeroes read split partial ...passed 00:11:28.079 Test: blockdev reset ...[2024-10-07 11:22:09.785868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:28.341 [2024-10-07 11:22:09.791235] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:28.341 passed 00:11:28.341 Test: blockdev write read 8 blocks ...passed 00:11:28.341 Test: blockdev write read size > 128k ...passed 00:11:28.341 Test: blockdev write read invalid size ...passed 00:11:28.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.341 Test: blockdev write read max offset ...passed 00:11:28.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.341 Test: blockdev writev readv 8 blocks ...passed 00:11:28.341 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.341 Test: blockdev writev readv block ...passed 00:11:28.341 Test: blockdev writev readv size > 128k ...passed 00:11:28.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.341 Test: blockdev comparev and writev ...[2024-10-07 11:22:09.803473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5a3a000 len:0x1000 00:11:28.341 [2024-10-07 11:22:09.803554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:28.341 passed 00:11:28.341 Test: blockdev nvme passthru rw ...passed 00:11:28.341 Test: blockdev nvme passthru vendor specific ...passed 00:11:28.341 Test: blockdev nvme admin passthru ...[2024-10-07 11:22:09.804263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:28.341 [2024-10-07 11:22:09.804300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:28.341 passed 00:11:28.341 Test: blockdev copy ...passed 00:11:28.341 Suite: bdevio tests on: Nvme2n1 00:11:28.341 Test: blockdev write read block ...passed 00:11:28.341 Test: blockdev write zeroes read block ...passed 00:11:28.341 Test: blockdev write zeroes read no split ...passed 00:11:28.341 Test: blockdev write zeroes read split ...passed 00:11:28.341 Test: blockdev write zeroes read split partial ...passed 00:11:28.341 Test: blockdev reset ...[2024-10-07 11:22:09.887390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:28.341 [2024-10-07 11:22:09.892908] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:28.341 passed 00:11:28.341 Test: blockdev write read 8 blocks ...passed 00:11:28.341 Test: blockdev write read size > 128k ...passed 00:11:28.341 Test: blockdev write read invalid size ...passed 00:11:28.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.341 Test: blockdev write read max offset ...passed 00:11:28.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.341 Test: blockdev writev readv 8 blocks ...passed 00:11:28.341 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.341 Test: blockdev writev readv block ...passed 00:11:28.341 Test: blockdev writev readv size > 128k ...passed 00:11:28.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.341 Test: blockdev comparev and writev ...[2024-10-07 11:22:09.907316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5a34000 len:0x1000 00:11:28.341 [2024-10-07 11:22:09.907397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:28.341 passed 00:11:28.341 Test: blockdev nvme passthru rw ...passed 00:11:28.341 Test: blockdev nvme passthru vendor specific ...[2024-10-07 11:22:09.908379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:28.341 [2024-10-07 11:22:09.908417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:28.341 passed 00:11:28.341 Test: blockdev nvme admin passthru ...passed 00:11:28.341 Test: blockdev copy ...passed 00:11:28.341 Suite: bdevio tests on: Nvme1n1 00:11:28.341 Test: blockdev write read block ...passed 00:11:28.341 Test: blockdev write zeroes read block ...passed 00:11:28.341 Test: blockdev write zeroes read no split ...passed 00:11:28.341 Test: blockdev write zeroes read split ...passed 00:11:28.341 Test: blockdev write zeroes read split partial ...passed 00:11:28.341 Test: blockdev reset ...[2024-10-07 11:22:10.016191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:11:28.341 [2024-10-07 11:22:10.021149] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:28.341 passed 00:11:28.341 Test: blockdev write read 8 blocks ...passed 00:11:28.341 Test: blockdev write read size > 128k ...passed 00:11:28.341 Test: blockdev write read invalid size ...passed 00:11:28.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.341 Test: blockdev write read max offset ...passed 00:11:28.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.341 Test: blockdev writev readv 8 blocks ...passed 00:11:28.341 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.341 Test: blockdev writev readv block ...passed 00:11:28.341 Test: blockdev writev readv size > 128k ...passed 00:11:28.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.341 Test: blockdev comparev and writev ...[2024-10-07 11:22:10.029948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5a30000 len:0x1000 00:11:28.341 [2024-10-07 11:22:10.030037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:28.341 passed 00:11:28.341 Test: blockdev nvme passthru rw ...passed 00:11:28.341 Test: blockdev nvme passthru vendor specific ...passed 00:11:28.341 Test: blockdev nvme admin passthru ...[2024-10-07 11:22:10.031052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:28.341 [2024-10-07 11:22:10.031098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:28.341 passed 00:11:28.341 Test: blockdev copy ...passed 00:11:28.341 Suite: bdevio tests on: Nvme0n1 00:11:28.341 Test: blockdev write read block ...passed 00:11:28.341 Test: blockdev write zeroes read block ...passed 00:11:28.341 Test: blockdev write zeroes read no split ...passed 00:11:28.605 Test: blockdev write zeroes read split ...passed 00:11:28.605 Test: blockdev write zeroes read split partial ...passed 00:11:28.605 Test: blockdev reset ...[2024-10-07 11:22:10.117974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:28.605 [2024-10-07 11:22:10.122753] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:28.605 passed 00:11:28.605 Test: blockdev write read 8 blocks ...passed 00:11:28.605 Test: blockdev write read size > 128k ...passed 00:11:28.605 Test: blockdev write read invalid size ...passed 00:11:28.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.605 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.605 Test: blockdev write read max offset ...passed 00:11:28.605 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.605 Test: blockdev writev readv 8 blocks ...passed 00:11:28.605 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.605 Test: blockdev writev readv block ...passed 00:11:28.605 Test: blockdev writev readv size > 128k ...passed 00:11:28.605 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.605 Test: blockdev comparev and writev ...[2024-10-07 11:22:10.130593] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:28.605 separate metadata which is not supported yet. 
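bdevio skips comparev_and_writev on Nvme0n1 because that namespace carries 64 bytes of separate, non-interleaved metadata, visible in the earlier bdev dump ("md_size": 64, "md_interleave": false). A quick check for such bdevs, sketched with the same rpc.py plus jq tooling the harness uses:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'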
00:11:28.605 passed 00:11:28.605 Test: blockdev nvme passthru rw ...passed 00:11:28.605 Test: blockdev nvme passthru vendor specific ...[2024-10-07 11:22:10.131241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:28.605 [2024-10-07 11:22:10.131348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:28.605 passed 00:11:28.605 Test: blockdev nvme admin passthru ...passed 00:11:28.605 Test: blockdev copy ...passed 00:11:28.605 00:11:28.605 Run Summary: Type Total Ran Passed Failed Inactive 00:11:28.605 suites 6 6 n/a 0 0 00:11:28.605 tests 138 138 138 0 0 00:11:28.605 asserts 893 893 893 0 n/a 00:11:28.605 00:11:28.605 Elapsed time = 1.789 seconds 00:11:28.605 0 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61721 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61721 ']' 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61721 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61721 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.605 killing process with pid 61721 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61721' 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61721 00:11:28.605 11:22:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61721 00:11:29.984 11:22:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:29.984 00:11:29.984 real 0m3.360s 00:11:29.984 user 0m8.367s 00:11:29.984 sys 0m0.466s 00:11:29.984 11:22:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.984 11:22:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:29.984 ************************************ 00:11:29.984 END TEST bdev_bounds 00:11:29.984 ************************************ 00:11:29.984 11:22:11 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:29.984 11:22:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:29.984 11:22:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.984 11:22:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:29.984 ************************************ 00:11:29.984 START TEST bdev_nbd 00:11:29.984 ************************************ 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:29.984 11:22:11 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61787 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61787 /var/tmp/spdk-nbd.sock 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61787 ']' 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.984 11:22:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:29.984 [2024-10-07 11:22:11.591981] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
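The bdev_nbd test drives everything through the nbd_* RPCs on /var/tmp/spdk-nbd.sock: each bdev is exported as a kernel /dev/nbdX node, probed with a direct-I/O dd, and torn down again. The core round trip, sketched by hand (socket and device path from this run; the waitfornbd retry loop is omitted):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0        # export the bdev as /dev/nbd0
  grep -q -w nbd0 /proc/partitions             # confirm the kernel registered it
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one 4 KiB read
  $rpc nbd_stop_disk /dev/nbd0                 # detach the export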
00:11:29.984 [2024-10-07 11:22:11.592134] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.242 [2024-10-07 11:22:11.761785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.500 [2024-10-07 11:22:12.023602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:31.069 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:31.328 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:31.328 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:31.328 11:22:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.328 1+0 records in 
00:11:31.328 1+0 records out 00:11:31.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553099 s, 7.4 MB/s 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:31.328 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:31.329 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:31.329 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:31.329 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.587 1+0 records in 00:11:31.587 1+0 records out 00:11:31.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701624 s, 5.8 MB/s 00:11:31.587 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.878 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:31.878 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.878 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:31.878 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:31.878 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:31.878 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:31.878 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:31.878 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.138 1+0 records in 00:11:32.138 1+0 records out 00:11:32.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516954 s, 7.9 MB/s 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.138 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.397 1+0 records in 00:11:32.397 1+0 records out 00:11:32.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627489 s, 6.5 MB/s 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.397 11:22:13 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:32.397 11:22:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:11:32.397 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:32.397 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.655 1+0 records in 00:11:32.655 1+0 records out 00:11:32.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525296 s, 7.8 MB/s 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:32.655 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.914 1+0 records in 00:11:32.914 1+0 records out 00:11:32.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817715 s, 5.0 MB/s 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:32.914 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd0", 00:11:33.173 "bdev_name": "Nvme0n1" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd1", 00:11:33.173 "bdev_name": "Nvme1n1" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd2", 00:11:33.173 "bdev_name": "Nvme2n1" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd3", 00:11:33.173 "bdev_name": "Nvme2n2" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd4", 00:11:33.173 "bdev_name": "Nvme2n3" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd5", 00:11:33.173 "bdev_name": "Nvme3n1" 00:11:33.173 } 00:11:33.173 ]' 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd0", 00:11:33.173 "bdev_name": "Nvme0n1" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd1", 00:11:33.173 "bdev_name": "Nvme1n1" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd2", 00:11:33.173 "bdev_name": "Nvme2n1" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd3", 00:11:33.173 "bdev_name": "Nvme2n2" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd4", 00:11:33.173 "bdev_name": "Nvme2n3" 00:11:33.173 }, 00:11:33.173 { 00:11:33.173 "nbd_device": "/dev/nbd5", 00:11:33.173 "bdev_name": "Nvme3n1" 00:11:33.173 } 00:11:33.173 ]' 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.173 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.432 11:22:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.692 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.951 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.211 11:22:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:34.780 11:22:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:34.780 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:35.039 /dev/nbd0 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:35.039 
11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:35.039 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.299 1+0 records in 00:11:35.299 1+0 records out 00:11:35.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727998 s, 5.6 MB/s 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:11:35.299 /dev/nbd1 00:11:35.299 11:22:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:35.299 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:35.299 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:35.299 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:35.299 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.560 1+0 records in 00:11:35.560 1+0 records out 00:11:35.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661687 s, 6.2 MB/s 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:35.560 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:11:35.560 /dev/nbd10 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.826 1+0 records in 00:11:35.826 1+0 records out 00:11:35.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763057 s, 5.4 MB/s 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:35.826 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:11:35.826 /dev/nbd11 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:36.085 11:22:17 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.085 1+0 records in 00:11:36.085 1+0 records out 00:11:36.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811385 s, 5.0 MB/s 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:36.085 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:11:36.343 /dev/nbd12 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.343 1+0 records in 00:11:36.343 1+0 records out 00:11:36.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676224 s, 6.1 MB/s 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:36.343 11:22:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:11:36.602 /dev/nbd13 
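Every nbd_start_disk call above is followed by the same readiness check: waitfornbd polls /proc/partitions until the kernel has registered the new /dev/nbdN node, then issues a single 4 KiB O_DIRECT read to confirm the device actually serves I/O. A condensed sketch of that helper, reconstructed from the xtrace (the poll delay and temp-file path are assumptions; the trace only shows the loop bounds and the dd/stat/rm steps):

    waitfornbd() {
        local nbd_name=$1 i size
        # poll until the kernel lists the device (the trace shows i = 1..20)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the delay is not visible in the xtrace
        done
        # one 4 KiB direct read proves the device answers I/O
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # non-empty read-back => device is live
    }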
00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.602 1+0 records in 00:11:36.602 1+0 records out 00:11:36.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683608 s, 6.0 MB/s 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.602 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd0", 00:11:36.864 "bdev_name": "Nvme0n1" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd1", 00:11:36.864 "bdev_name": "Nvme1n1" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd10", 00:11:36.864 "bdev_name": "Nvme2n1" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd11", 00:11:36.864 "bdev_name": "Nvme2n2" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd12", 00:11:36.864 "bdev_name": "Nvme2n3" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd13", 00:11:36.864 "bdev_name": "Nvme3n1" 00:11:36.864 } 00:11:36.864 ]' 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd0", 00:11:36.864 "bdev_name": "Nvme0n1" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd1", 00:11:36.864 "bdev_name": "Nvme1n1" 00:11:36.864 
}, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd10", 00:11:36.864 "bdev_name": "Nvme2n1" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd11", 00:11:36.864 "bdev_name": "Nvme2n2" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd12", 00:11:36.864 "bdev_name": "Nvme2n3" 00:11:36.864 }, 00:11:36.864 { 00:11:36.864 "nbd_device": "/dev/nbd13", 00:11:36.864 "bdev_name": "Nvme3n1" 00:11:36.864 } 00:11:36.864 ]' 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:36.864 /dev/nbd1 00:11:36.864 /dev/nbd10 00:11:36.864 /dev/nbd11 00:11:36.864 /dev/nbd12 00:11:36.864 /dev/nbd13' 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:36.864 /dev/nbd1 00:11:36.864 /dev/nbd10 00:11:36.864 /dev/nbd11 00:11:36.864 /dev/nbd12 00:11:36.864 /dev/nbd13' 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:36.864 256+0 records in 00:11:36.864 256+0 records out 00:11:36.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596828 s, 176 MB/s 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:36.864 256+0 records in 00:11:36.864 256+0 records out 00:11:36.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115574 s, 9.1 MB/s 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:36.864 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:37.122 256+0 records in 00:11:37.122 256+0 records out 00:11:37.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12288 s, 8.5 MB/s 00:11:37.122 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.122 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:37.122 256+0 records in 00:11:37.122 256+0 records out 00:11:37.122 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.122499 s, 8.6 MB/s 00:11:37.122 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.122 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:37.381 256+0 records in 00:11:37.381 256+0 records out 00:11:37.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118946 s, 8.8 MB/s 00:11:37.381 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.381 11:22:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:37.381 256+0 records in 00:11:37.381 256+0 records out 00:11:37.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1233 s, 8.5 MB/s 00:11:37.381 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.381 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:37.640 256+0 records in 00:11:37.640 256+0 records out 00:11:37.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118035 s, 8.9 MB/s 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:37.640 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.641 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:37.899 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:37.899 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:37.899 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:37.899 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.900 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.900 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:37.900 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.900 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.900 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.900 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.158 11:22:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.438 11:22:20 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.438 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.705 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.990 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:39.249 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:39.249 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:39.249 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:39.249 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.249 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.249 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:39.249 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:39.249 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.250 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:39.250 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.250 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:39.508 11:22:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:39.508 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:39.509 11:22:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:39.509 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.509 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:39.509 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:39.768 malloc_lvol_verify 00:11:39.768 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:40.026 f1f9abbf-c0fa-4c94-923a-075da9fe0b62 00:11:40.026 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:40.026 1ca58d06-52ab-43d9-a707-9b063193bf50 00:11:40.026 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:40.284 /dev/nbd0 00:11:40.284 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:40.284 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:40.284 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:40.284 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:40.284 11:22:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:40.542 mke2fs 1.47.0 (5-Feb-2023) 00:11:40.542 Discarding device blocks: 0/4096 done 00:11:40.542 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:40.542 00:11:40.542 Allocating group tables: 0/1 done 00:11:40.542 Writing inode tables: 0/1 done 00:11:40.542 Creating journal (1024 blocks): done 00:11:40.542 Writing superblocks and filesystem accounting information: 0/1 done 00:11:40.542 00:11:40.542 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:40.542 11:22:22 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:40.542 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:40.542 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:40.542 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:40.542 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.542 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61787 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61787 ']' 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61787 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61787 00:11:40.800 killing process with pid 61787 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61787' 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61787 00:11:40.800 11:22:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61787 00:11:42.705 ************************************ 00:11:42.705 END TEST bdev_nbd 00:11:42.705 ************************************ 00:11:42.705 11:22:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:42.705 00:11:42.705 real 0m12.531s 00:11:42.705 user 0m16.230s 00:11:42.705 sys 0m4.943s 00:11:42.705 11:22:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.705 11:22:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:42.705 11:22:24 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:42.705 11:22:24 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:11:42.705 skipping fio tests on NVMe due to multi-ns failures. 00:11:42.705 11:22:24 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
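The bdev_nbd test that just finished exercised two things for each of the six NVMe bdevs: exporting the bdev over the kernel nbd driver (nbd_start_disk / nbd_stop_disk RPCs against /var/tmp/spdk-nbd.sock) and a data round-trip through the block layer. The round-trip is the dd/cmp sequence traced above; condensed into a sketch (names follow nbd_common.sh as shown in the trace, but the real helper also takes a write/verify operation argument and keeps its temp file under test/bdev):

    # usage: nbd_dd_data_verify /dev/nbd0 /dev/nbd1 ...
    nbd_dd_data_verify() {
        local tmp=/tmp/nbdrandtest nbd
        dd if=/dev/urandom of="$tmp" bs=4096 count=256   # 1 MiB of random data
        for nbd in "$@"; do                              # write it to every nbd node
            dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        done
        for nbd in "$@"; do                              # byte-for-byte read-back check
            cmp -b -n 1M "$tmp" "$nbd" || return 1
        done
        rm "$tmp"
    }

The ~9 MB/s figures in the per-device dd records are expected for this pattern: each 4 KiB direct write round-trips through the kernel nbd driver and the userspace SPDK target.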
00:11:42.705 11:22:24 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:42.705 11:22:24 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:42.705 11:22:24 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:42.705 11:22:24 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.705 11:22:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:42.705 ************************************ 00:11:42.705 START TEST bdev_verify 00:11:42.705 ************************************ 00:11:42.705 11:22:24 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:42.705 [2024-10-07 11:22:24.174259] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:42.705 [2024-10-07 11:22:24.174387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62186 ] 00:11:42.705 [2024-10-07 11:22:24.347294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:42.965 [2024-10-07 11:22:24.568268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.965 [2024-10-07 11:22:24.568319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.900 Running I/O for 5 seconds... 00:11:45.774 20800.00 IOPS, 81.25 MiB/s [2024-10-07T11:22:28.862Z] 21184.00 IOPS, 82.75 MiB/s [2024-10-07T11:22:29.796Z] 21354.67 IOPS, 83.42 MiB/s [2024-10-07T11:22:30.733Z] 21616.00 IOPS, 84.44 MiB/s [2024-10-07T11:22:30.733Z] 21734.40 IOPS, 84.90 MiB/s 00:11:49.022 Latency(us) 00:11:49.022 [2024-10-07T11:22:30.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.022 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x0 length 0xbd0bd 00:11:49.022 Nvme0n1 : 5.04 1777.95 6.95 0.00 0.00 71816.45 15686.53 66957.26 00:11:49.022 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:49.022 Nvme0n1 : 5.05 1798.22 7.02 0.00 0.00 70731.89 15686.53 74537.33 00:11:49.022 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x0 length 0xa0000 00:11:49.022 Nvme1n1 : 5.04 1777.45 6.94 0.00 0.00 71746.53 17686.82 61903.88 00:11:49.022 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0xa0000 length 0xa0000 00:11:49.022 Nvme1n1 : 5.06 1797.58 7.02 0.00 0.00 70641.25 14423.18 73695.10 00:11:49.022 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x0 length 0x80000 00:11:49.022 Nvme2n1 : 5.04 1776.91 6.94 0.00 0.00 71657.53 16949.87 63588.34 00:11:49.022 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x80000 length 0x80000 00:11:49.022 Nvme2n1 : 5.06 1796.41 7.02 0.00 0.00 70556.83 13212.48 73273.99 00:11:49.022 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x0 length 0x80000 00:11:49.022 Nvme2n2 : 5.04 1776.37 6.94 0.00 0.00 71576.81 16423.48 62746.11 00:11:49.022 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x80000 length 0x80000 00:11:49.022 Nvme2n2 : 5.07 1804.87 7.05 0.00 0.00 70163.53 4421.71 72852.87 00:11:49.022 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x0 length 0x80000 00:11:49.022 Nvme2n3 : 5.06 1784.96 6.97 0.00 0.00 71116.54 3816.35 64430.57 00:11:49.022 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x80000 length 0x80000 00:11:49.022 Nvme2n3 : 5.10 1808.21 7.06 0.00 0.00 70019.27 10054.12 72431.76 00:11:49.022 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x0 length 0x20000 00:11:49.022 Nvme3n1 : 5.07 1792.36 7.00 0.00 0.00 70741.65 9317.17 67799.49 00:11:49.022 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:49.022 Verification LBA range: start 0x20000 length 0x20000 00:11:49.022 Nvme3n1 : 5.05 1798.78 7.03 0.00 0.00 70976.59 16423.48 73273.99 00:11:49.022 [2024-10-07T11:22:30.733Z] =================================================================================================================== 00:11:49.022 [2024-10-07T11:22:30.733Z] Total : 21490.06 83.95 0.00 0.00 70973.65 3816.35 74537.33 00:11:50.397 00:11:50.397 real 0m7.949s 00:11:50.397 user 0m14.428s 00:11:50.397 sys 0m0.323s 00:11:50.397 11:22:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.397 11:22:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:50.397 ************************************ 00:11:50.397 END TEST bdev_verify 00:11:50.397 ************************************ 00:11:50.397 11:22:32 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:50.397 11:22:32 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:50.397 11:22:32 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.397 11:22:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:50.397 ************************************ 00:11:50.397 START TEST bdev_verify_big_io 00:11:50.397 ************************************ 00:11:50.397 11:22:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:50.655 [2024-10-07 11:22:32.167265] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
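Both verify passes use the same shape of bdevperf invocation; only the I/O size changes between bdev_verify (-o 4096) and the bdev_verify_big_io run starting here (-o 65536). Pulled out of the run_test record for readability, with best-effort glosses of the flags (inferred from the trace and the paired per-core result rows, not quoted from bdevperf's help text):

    # -q 128     queue depth per job
    # -o 4096    I/O size in bytes (65536 in the big-I/O variant)
    # -w verify  write-read-compare workload
    # -t 5       run time in seconds
    # -C         a job per core for each bdev (hence the paired core-mask 0x1/0x2 rows)
    # -m 0x3     run SPDK reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3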
00:11:50.655 [2024-10-07 11:22:32.167409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62295 ] 00:11:50.655 [2024-10-07 11:22:32.342079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:50.914 [2024-10-07 11:22:32.582403] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.914 [2024-10-07 11:22:32.582405] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.844 Running I/O for 5 seconds... 00:11:55.744 987.00 IOPS, 61.69 MiB/s [2024-10-07T11:22:38.830Z] 1901.50 IOPS, 118.84 MiB/s [2024-10-07T11:22:39.398Z] 2483.00 IOPS, 155.19 MiB/s [2024-10-07T11:22:39.398Z] 2584.00 IOPS, 161.50 MiB/s 00:11:57.687 Latency(us) 00:11:57.687 [2024-10-07T11:22:39.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.687 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x0 length 0xbd0b 00:11:57.687 Nvme0n1 : 5.62 146.70 9.17 0.00 0.00 837314.97 24529.94 1219548.63 00:11:57.687 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:57.687 Nvme0n1 : 5.50 157.14 9.82 0.00 0.00 795060.65 21897.97 781589.18 00:11:57.687 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x0 length 0xa000 00:11:57.687 Nvme1n1 : 5.62 150.73 9.42 0.00 0.00 804719.92 38321.45 1246499.98 00:11:57.687 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0xa000 length 0xa000 00:11:57.687 Nvme1n1 : 5.57 156.27 9.77 0.00 0.00 773395.09 41269.26 724317.56 00:11:57.687 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x0 length 0x8000 00:11:57.687 Nvme2n1 : 5.62 157.75 9.86 0.00 0.00 747184.40 51797.13 663677.02 00:11:57.687 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x8000 length 0x8000 00:11:57.687 Nvme2n1 : 5.64 159.26 9.95 0.00 0.00 740317.81 72010.64 811909.45 00:11:57.687 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x0 length 0x8000 00:11:57.687 Nvme2n2 : 5.72 161.36 10.09 0.00 0.00 717850.73 23266.60 1293664.85 00:11:57.687 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x8000 length 0x8000 00:11:57.687 Nvme2n2 : 5.65 162.57 10.16 0.00 0.00 713515.82 69062.84 764744.58 00:11:57.687 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x0 length 0x8000 00:11:57.687 Nvme2n3 : 5.72 168.31 10.52 0.00 0.00 670356.75 15686.53 950035.12 00:11:57.687 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x8000 length 0x8000 00:11:57.687 Nvme2n3 : 5.71 175.34 10.96 0.00 0.00 651765.38 24003.55 778220.26 00:11:57.687 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x0 length 0x2000 00:11:57.687 Nvme3n1 : 5.76 186.91 11.68 0.00 0.00 591168.53 842.23 1374518.90 00:11:57.687 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:57.687 Verification LBA range: start 0x2000 length 0x2000 00:11:57.687 Nvme3n1 : 5.71 183.12 11.44 0.00 0.00 610166.70 1138.33 774851.34 00:11:57.687 [2024-10-07T11:22:39.398Z] =================================================================================================================== 00:11:57.687 [2024-10-07T11:22:39.398Z] Total : 1965.47 122.84 0.00 0.00 715007.23 842.23 1374518.90 00:11:59.592 00:11:59.592 real 0m9.130s 00:11:59.592 user 0m16.748s 00:11:59.592 sys 0m0.369s 00:11:59.592 11:22:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.592 11:22:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.592 ************************************ 00:11:59.592 END TEST bdev_verify_big_io 00:11:59.592 ************************************ 00:11:59.592 11:22:41 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:59.592 11:22:41 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:59.592 11:22:41 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.592 11:22:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:59.592 ************************************ 00:11:59.592 START TEST bdev_write_zeroes 00:11:59.592 ************************************ 00:11:59.592 11:22:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:59.851 [2024-10-07 11:22:41.384097] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:59.851 [2024-10-07 11:22:41.384221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62410 ] 00:11:59.851 [2024-10-07 11:22:41.555430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.110 [2024-10-07 11:22:41.775727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.045 Running I/O for 1 seconds... 
00:12:01.978 72192.00 IOPS, 282.00 MiB/s 00:12:01.978 Latency(us) 00:12:01.978 [2024-10-07T11:22:43.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.978 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:01.978 Nvme0n1 : 1.02 11985.37 46.82 0.00 0.00 10645.23 8264.38 32215.29 00:12:01.978 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:01.978 Nvme1n1 : 1.02 11970.89 46.76 0.00 0.00 10644.91 8632.85 33268.07 00:12:01.978 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:01.978 Nvme2n1 : 1.02 11958.31 46.71 0.00 0.00 10602.15 8317.02 30530.83 00:12:01.978 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:01.978 Nvme2n2 : 1.02 11996.13 46.86 0.00 0.00 10525.69 7106.31 24635.22 00:12:01.978 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:01.979 Nvme2n3 : 1.03 11984.77 46.82 0.00 0.00 10502.99 7001.03 22740.20 00:12:01.979 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:01.979 Nvme3n1 : 1.03 11973.08 46.77 0.00 0.00 10484.70 6790.48 21582.14 00:12:01.979 [2024-10-07T11:22:43.690Z] =================================================================================================================== 00:12:01.979 [2024-10-07T11:22:43.690Z] Total : 71868.56 280.74 0.00 0.00 10567.45 6790.48 33268.07 00:12:03.354 00:12:03.354 real 0m3.543s 00:12:03.354 user 0m3.139s 00:12:03.354 sys 0m0.286s 00:12:03.354 11:22:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.354 11:22:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:03.354 ************************************ 00:12:03.354 END TEST bdev_write_zeroes 00:12:03.354 ************************************ 00:12:03.354 11:22:44 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:03.355 11:22:44 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:03.355 11:22:44 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.355 11:22:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:03.355 ************************************ 00:12:03.355 START TEST bdev_json_nonenclosed 00:12:03.355 ************************************ 00:12:03.355 11:22:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:03.355 [2024-10-07 11:22:45.014003] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
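bdev_json_nonenclosed, starting here, is a negative test: bdevperf is pointed at a deliberately malformed configuration (test/bdev/nonenclosed.json) and must fail cleanly with the json_config error logged below instead of crashing. The trace never shows the file's contents; a hypothetical stand-in that should trip the same check is any valid JSON whose top level is not an object:

    # hypothetical stand-in for test/bdev/nonenclosed.json (real contents not in the trace)
    cat > /tmp/nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
    # expect: json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.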
00:12:03.355 [2024-10-07 11:22:45.014161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62468 ] 00:12:03.613 [2024-10-07 11:22:45.190785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.871 [2024-10-07 11:22:45.409660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.871 [2024-10-07 11:22:45.409772] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:03.872 [2024-10-07 11:22:45.409797] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:03.872 [2024-10-07 11:22:45.409810] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:04.438 00:12:04.438 real 0m0.928s 00:12:04.438 user 0m0.651s 00:12:04.438 sys 0m0.171s 00:12:04.438 11:22:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.438 ************************************ 00:12:04.438 END TEST bdev_json_nonenclosed 00:12:04.438 ************************************ 00:12:04.438 11:22:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:04.438 11:22:45 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:04.438 11:22:45 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:04.438 11:22:45 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.438 11:22:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.438 ************************************ 00:12:04.438 START TEST bdev_json_nonarray 00:12:04.438 ************************************ 00:12:04.438 11:22:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:04.438 [2024-10-07 11:22:46.023326] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:12:04.438 [2024-10-07 11:22:46.023478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62495 ] 00:12:04.696 [2024-10-07 11:22:46.202729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.955 [2024-10-07 11:22:46.423921] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.955 [2024-10-07 11:22:46.424018] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
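Both JSON negative tests in this stretch are expected to fail inside bdevperf's config loader rather than do any I/O: bdev_json_nonenclosed feeds a config whose body is not enclosed in {}, and bdev_json_nonarray (this run) feeds one whose "subsystems" key is not an array. The actual contents of nonenclosed.json and nonarray.json are not shown in this log; as a purely illustrative sketch inferred from the two json_config_prepare_ctx error messages, hypothetical stand-ins would resemble:

# hypothetical stand-ins for the two invalid configs (real test files not shown in this log)
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF

In each case json_config_prepare_ctx rejects the config, spdk_rpc_server_finish reports no server listening, and the app stops with a non-zero code, which is the outcome these tests treat as a pass.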
00:12:04.955 [2024-10-07 11:22:46.424042] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:04.955 [2024-10-07 11:22:46.424055] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:05.214 00:12:05.214 real 0m0.935s 00:12:05.214 user 0m0.662s 00:12:05.214 sys 0m0.167s 00:12:05.214 11:22:46 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.214 11:22:46 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:05.214 ************************************ 00:12:05.214 END TEST bdev_json_nonarray 00:12:05.214 ************************************ 00:12:05.214 11:22:46 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:12:05.214 11:22:46 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:12:05.214 11:22:46 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:12:05.214 11:22:46 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:05.214 11:22:46 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:12:05.214 11:22:46 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:05.214 11:22:46 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:05.473 11:22:46 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:12:05.473 11:22:46 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:12:05.473 11:22:46 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:12:05.473 11:22:46 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:12:05.473 00:12:05.473 real 0m46.666s 00:12:05.473 user 1m7.692s 00:12:05.473 sys 0m8.209s 00:12:05.473 11:22:46 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.473 11:22:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:05.473 ************************************ 00:12:05.473 END TEST blockdev_nvme 00:12:05.473 ************************************ 00:12:05.473 11:22:46 -- spdk/autotest.sh@209 -- # uname -s 00:12:05.473 11:22:46 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:12:05.473 11:22:46 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:05.473 11:22:46 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.473 11:22:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.473 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:12:05.473 ************************************ 00:12:05.473 START TEST blockdev_nvme_gpt 00:12:05.473 ************************************ 00:12:05.473 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:05.473 * Looking for test storage... 
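The blockdev_nvme_gpt suite that begins here has to manufacture its own GPT-labelled device before it can exercise the gpt bdev module: setup_gpt_conf picks the first NVMe disk parted reports as having an unrecognised label, partitions it, then retags both partitions with SPDK's GPT partition-type GUIDs so the module will claim them. The key commands, condensed from the trace further below (same device and GUIDs as in this run):

# label the disk and split it into two halves
parted -s /dev/nvme0n1 mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%
# retag partition 1 with SPDK_GPT_PART_TYPE_GUID and partition 2 with the legacy *_GUID_OLD,
# each with a fixed unique partition GUID that later shows up as the partition bdev's UUID
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

Both type GUIDs are scraped out of module/bdev/gpt/gpt.h at run time rather than hard-coded, which is what the grep/read pipeline over SPDK_GPT_PART_TYPE_GUID in the trace below is doing.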
00:12:05.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:05.473 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:05.473 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:12:05.473 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:05.731 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.731 11:22:47 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:12:05.731 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.731 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:05.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.731 --rc genhtml_branch_coverage=1 00:12:05.731 --rc genhtml_function_coverage=1 00:12:05.731 --rc genhtml_legend=1 00:12:05.731 --rc geninfo_all_blocks=1 00:12:05.731 --rc geninfo_unexecuted_blocks=1 00:12:05.731 00:12:05.731 ' 00:12:05.731 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:05.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.732 --rc 
genhtml_branch_coverage=1 00:12:05.732 --rc genhtml_function_coverage=1 00:12:05.732 --rc genhtml_legend=1 00:12:05.732 --rc geninfo_all_blocks=1 00:12:05.732 --rc geninfo_unexecuted_blocks=1 00:12:05.732 00:12:05.732 ' 00:12:05.732 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:05.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.732 --rc genhtml_branch_coverage=1 00:12:05.732 --rc genhtml_function_coverage=1 00:12:05.732 --rc genhtml_legend=1 00:12:05.732 --rc geninfo_all_blocks=1 00:12:05.732 --rc geninfo_unexecuted_blocks=1 00:12:05.732 00:12:05.732 ' 00:12:05.732 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:05.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.732 --rc genhtml_branch_coverage=1 00:12:05.732 --rc genhtml_function_coverage=1 00:12:05.732 --rc genhtml_legend=1 00:12:05.732 --rc geninfo_all_blocks=1 00:12:05.732 --rc geninfo_unexecuted_blocks=1 00:12:05.732 00:12:05.732 ' 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62583 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:12:05.732 11:22:47 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62583 00:12:05.732 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 62583 ']' 00:12:05.732 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.732 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.732 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.732 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.732 11:22:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:05.732 [2024-10-07 11:22:47.368969] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:12:05.732 [2024-10-07 11:22:47.369124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62583 ] 00:12:05.991 [2024-10-07 11:22:47.544942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.250 [2024-10-07 11:22:47.757305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.185 11:22:48 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.185 11:22:48 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:12:07.185 11:22:48 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:07.185 11:22:48 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:12:07.185 11:22:48 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:07.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:08.008 Waiting for block devices as requested 00:12:08.008 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:08.008 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:08.266 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:08.266 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:13.534 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:13.534 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.535 11:22:55 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:12:13.535 11:22:55 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:12:13.535 BYT; 00:12:13.535 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:12:13.535 BYT; 00:12:13.535 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:13.535 11:22:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:13.535 11:22:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:12:14.910 The operation has completed successfully. 00:12:14.910 11:22:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:12:15.847 The operation has completed successfully. 00:12:15.847 11:22:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:16.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:16.983 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.241 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.241 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.241 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.241 11:22:58 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:12:17.241 11:22:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.241 11:22:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:17.241 [] 00:12:17.241 11:22:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.241 11:22:58 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:12:17.241 11:22:58 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:12:17.241 11:22:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:17.241 11:22:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:17.500 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:17.500 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.500 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.760 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.760 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:12:17.760 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:17.760 11:22:59 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.760 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.760 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.760 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:17.760 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.760 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:17.760 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:18.019 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.019 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:18.019 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:18.020 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "85908865-2b50-4e8c-a13f-ddd933abebb9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "85908865-2b50-4e8c-a13f-ddd933abebb9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ecb6e5ce-0748-4ed8-9626-bcd7f61a712c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ecb6e5ce-0748-4ed8-9626-bcd7f61a712c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b861cbc6-7f21-4aef-9afb-e5e3f627daaa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b861cbc6-7f21-4aef-9afb-e5e3f627daaa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "db30a29a-74c1-4846-a3bc-7bdfa60ac417"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "db30a29a-74c1-4846-a3bc-7bdfa60ac417",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f5a989f4-19f3-4cfe-a65a-cfa1638d1ca6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f5a989f4-19f3-4cfe-a65a-cfa1638d1ca6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:18.020 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:18.020 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:12:18.020 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:18.020 11:22:59 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62583 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 62583 ']' 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 62583 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62583 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:18.020 killing process with pid 62583 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62583' 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 62583 00:12:18.020 11:22:59 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 62583 00:12:20.552 11:23:02 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:20.552 11:23:02 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:20.552 11:23:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:20.552 11:23:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.552 11:23:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:20.552 ************************************ 00:12:20.552 START TEST bdev_hello_world 00:12:20.552 ************************************ 00:12:20.552 11:23:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:20.810 
[2024-10-07 11:23:02.314677] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:12:20.810 [2024-10-07 11:23:02.314845] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63236 ] 00:12:20.810 [2024-10-07 11:23:02.485066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.068 [2024-10-07 11:23:02.701014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.005 [2024-10-07 11:23:03.353588] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:22.005 [2024-10-07 11:23:03.353643] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:22.005 [2024-10-07 11:23:03.353665] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:22.005 [2024-10-07 11:23:03.356650] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:22.005 [2024-10-07 11:23:03.357391] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:22.005 [2024-10-07 11:23:03.357427] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:22.005 [2024-10-07 11:23:03.357759] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:22.005 00:12:22.005 [2024-10-07 11:23:03.357798] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:22.953 00:12:22.953 real 0m2.436s 00:12:22.953 user 0m2.065s 00:12:22.953 sys 0m0.262s 00:12:22.953 11:23:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.953 11:23:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:22.953 ************************************ 00:12:22.953 END TEST bdev_hello_world 00:12:22.953 ************************************ 00:12:23.211 11:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:23.211 11:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:23.211 11:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.211 11:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:23.211 ************************************ 00:12:23.211 START TEST bdev_bounds 00:12:23.211 ************************************ 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63278 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:23.211 Process bdevio pid: 63278 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63278' 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63278 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 63278 ']' 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.211 11:23:04 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:23.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:23.211 11:23:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:23.211 [2024-10-07 11:23:04.820398] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:12:23.211 [2024-10-07 11:23:04.820533] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63278 ] 00:12:23.469 [2024-10-07 11:23:04.987900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:23.727 [2024-10-07 11:23:05.232999] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.727 [2024-10-07 11:23:05.233150] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.727 [2024-10-07 11:23:05.233178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.296 11:23:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.296 11:23:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:24.296 11:23:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:24.555 I/O targets: 00:12:24.555 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:24.555 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:12:24.555 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:12:24.555 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:24.555 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:24.555 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:24.555 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:24.555 00:12:24.555 00:12:24.555 CUnit - A unit testing framework for C - Version 2.1-3 00:12:24.555 http://cunit.sourceforge.net/ 00:12:24.555 00:12:24.555 00:12:24.555 Suite: bdevio tests on: Nvme3n1 00:12:24.555 Test: blockdev write read block ...passed 00:12:24.555 Test: blockdev write zeroes read block ...passed 00:12:24.555 Test: blockdev write zeroes read no split ...passed 00:12:24.555 Test: blockdev write zeroes read split ...passed 00:12:24.555 Test: blockdev write zeroes read split partial ...passed 00:12:24.555 Test: blockdev reset ...[2024-10-07 11:23:06.123221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:12:24.555 [2024-10-07 11:23:06.127128] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:24.555 passed 00:12:24.555 Test: blockdev write read 8 blocks ...passed 00:12:24.555 Test: blockdev write read size > 128k ...passed 00:12:24.555 Test: blockdev write read invalid size ...passed 00:12:24.555 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.555 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.555 Test: blockdev write read max offset ...passed 00:12:24.555 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.555 Test: blockdev writev readv 8 blocks ...passed 00:12:24.555 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.555 Test: blockdev writev readv block ...passed 00:12:24.555 Test: blockdev writev readv size > 128k ...passed 00:12:24.555 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.555 Test: blockdev comparev and writev ...[2024-10-07 11:23:06.135774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0606000 len:0x1000 00:12:24.555 [2024-10-07 11:23:06.135980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:24.555 passed 00:12:24.555 Test: blockdev nvme passthru rw ...passed 00:12:24.555 Test: blockdev nvme passthru vendor specific ...[2024-10-07 11:23:06.137067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:24.555 [2024-10-07 11:23:06.137265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:12:24.555 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:12:24.555 passed 00:12:24.555 Test: blockdev copy ...passed 00:12:24.555 Suite: bdevio tests on: Nvme2n3 00:12:24.555 Test: blockdev write read block ...passed 00:12:24.555 Test: blockdev write zeroes read block ...passed 00:12:24.555 Test: blockdev write zeroes read no split ...passed 00:12:24.555 Test: blockdev write zeroes read split ...passed 00:12:24.555 Test: blockdev write zeroes read split partial ...passed 00:12:24.555 Test: blockdev reset ...[2024-10-07 11:23:06.218981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:24.555 passed 00:12:24.555 Test: blockdev write read 8 blocks ...[2024-10-07 11:23:06.224099] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:24.555 passed 00:12:24.555 Test: blockdev write read size > 128k ...passed 00:12:24.555 Test: blockdev write read invalid size ...passed 00:12:24.555 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.555 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.555 Test: blockdev write read max offset ...passed 00:12:24.555 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.555 Test: blockdev writev readv 8 blocks ...passed 00:12:24.555 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.555 Test: blockdev writev readv block ...passed 00:12:24.555 Test: blockdev writev readv size > 128k ...passed 00:12:24.555 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.555 Test: blockdev comparev and writev ...[2024-10-07 11:23:06.233153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c083c000 len:0x1000 00:12:24.555 [2024-10-07 11:23:06.233235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:24.555 passed 00:12:24.555 Test: blockdev nvme passthru rw ...passed 00:12:24.555 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.555 Test: blockdev nvme admin passthru ...[2024-10-07 11:23:06.234173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:24.555 [2024-10-07 11:23:06.234208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:24.555 passed 00:12:24.555 Test: blockdev copy ...passed 00:12:24.555 Suite: bdevio tests on: Nvme2n2 00:12:24.555 Test: blockdev write read block ...passed 00:12:24.555 Test: blockdev write zeroes read block ...passed 00:12:24.555 Test: blockdev write zeroes read no split ...passed 00:12:24.813 Test: blockdev write zeroes read split ...passed 00:12:24.813 Test: blockdev write zeroes read split partial ...passed 00:12:24.813 Test: blockdev reset ...[2024-10-07 11:23:06.315297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:24.813 [2024-10-07 11:23:06.320387] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:24.813 passed 00:12:24.813 Test: blockdev write read 8 blocks ...passed 00:12:24.813 Test: blockdev write read size > 128k ...passed 00:12:24.814 Test: blockdev write read invalid size ...passed 00:12:24.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.814 Test: blockdev write read max offset ...passed 00:12:24.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.814 Test: blockdev writev readv 8 blocks ...passed 00:12:24.814 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.814 Test: blockdev writev readv block ...passed 00:12:24.814 Test: blockdev writev readv size > 128k ...passed 00:12:24.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.814 Test: blockdev comparev and writev ...passed 00:12:24.814 Test: blockdev nvme passthru rw ...[2024-10-07 11:23:06.329630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0836000 len:0x1000 00:12:24.814 [2024-10-07 11:23:06.329711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:24.814 passed 00:12:24.814 Test: blockdev nvme passthru vendor specific ...[2024-10-07 11:23:06.330588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:24.814 [2024-10-07 11:23:06.330623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:24.814 passed 00:12:24.814 Test: blockdev nvme admin passthru ...passed 00:12:24.814 Test: blockdev copy ...passed 00:12:24.814 Suite: bdevio tests on: Nvme2n1 00:12:24.814 Test: blockdev write read block ...passed 00:12:24.814 Test: blockdev write zeroes read block ...passed 00:12:24.814 Test: blockdev write zeroes read no split ...passed 00:12:24.814 Test: blockdev write zeroes read split ...passed 00:12:24.814 Test: blockdev write zeroes read split partial ...passed 00:12:24.814 Test: blockdev reset ...[2024-10-07 11:23:06.414069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:12:24.814 passed 00:12:24.814 Test: blockdev write read 8 blocks ...[2024-10-07 11:23:06.419379] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:24.814 passed 00:12:24.814 Test: blockdev write read size > 128k ...passed 00:12:24.814 Test: blockdev write read invalid size ...passed 00:12:24.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.814 Test: blockdev write read max offset ...passed 00:12:24.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.814 Test: blockdev writev readv 8 blocks ...passed 00:12:24.814 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.814 Test: blockdev writev readv block ...passed 00:12:24.814 Test: blockdev writev readv size > 128k ...passed 00:12:24.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.814 Test: blockdev comparev and writev ...[2024-10-07 11:23:06.429025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0832000 len:0x1000 00:12:24.814 [2024-10-07 11:23:06.429109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:24.814 passed 00:12:24.814 Test: blockdev nvme passthru rw ...passed 00:12:24.814 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.814 Test: blockdev nvme admin passthru ...[2024-10-07 11:23:06.430152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:24.814 [2024-10-07 11:23:06.430188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:24.814 passed 00:12:24.814 Test: blockdev copy ...passed 00:12:24.814 Suite: bdevio tests on: Nvme1n1p2 00:12:24.814 Test: blockdev write read block ...passed 00:12:24.814 Test: blockdev write zeroes read block ...passed 00:12:24.814 Test: blockdev write zeroes read no split ...passed 00:12:24.814 Test: blockdev write zeroes read split ...passed 00:12:24.814 Test: blockdev write zeroes read split partial ...passed 00:12:24.814 Test: blockdev reset ...[2024-10-07 11:23:06.513160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:12:24.814 passed 00:12:24.814 Test: blockdev write read 8 blocks ...[2024-10-07 11:23:06.517696] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:24.814 passed
00:12:24.814 Test: blockdev write read size > 128k ...passed
00:12:24.814 Test: blockdev write read invalid size ...passed
00:12:24.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:24.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:24.814 Test: blockdev write read max offset ...passed
00:12:24.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:24.814 Test: blockdev writev readv 8 blocks ...passed
00:12:24.814 Test: blockdev writev readv 30 x 1block ...passed
00:12:25.072 Test: blockdev writev readv block ...passed
00:12:25.072 Test: blockdev writev readv size > 128k ...passed
00:12:25.072 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:25.072 Test: blockdev comparev and writev ...[2024-10-07 11:23:06.527252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c082e000 len:0x1000
00:12:25.072 [2024-10-07 11:23:06.527333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:12:25.072 passed
00:12:25.072 Test: blockdev nvme passthru rw ...passed
00:12:25.072 Test: blockdev nvme passthru vendor specific ...passed
00:12:25.072 Test: blockdev nvme admin passthru ...passed
00:12:25.072 Test: blockdev copy ...passed
00:12:25.072 Suite: bdevio tests on: Nvme1n1p1
00:12:25.072 Test: blockdev write read block ...passed
00:12:25.072 Test: blockdev write zeroes read block ...passed
00:12:25.072 Test: blockdev write zeroes read no split ...passed
00:12:25.072 Test: blockdev write zeroes read split ...passed
00:12:25.072 Test: blockdev write zeroes read split partial ...passed
00:12:25.072 Test: blockdev reset ...[2024-10-07 11:23:06.599644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:12:25.072 [2024-10-07 11:23:06.604298] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:12:25.072 passed
00:12:25.072 Test: blockdev write read 8 blocks ...passed
00:12:25.072 Test: blockdev write read size > 128k ...passed
00:12:25.072 Test: blockdev write read invalid size ...passed
00:12:25.072 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:25.072 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:25.072 Test: blockdev write read max offset ...passed
00:12:25.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:25.072 Test: blockdev writev readv 8 blocks ...passed
00:12:25.072 Test: blockdev writev readv 30 x 1block ...passed
00:12:25.072 Test: blockdev writev readv block ...passed
00:12:25.072 Test: blockdev writev readv size > 128k ...passed
00:12:25.072 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:25.072 Test: blockdev comparev and writev ...[2024-10-07 11:23:06.613497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b600e000 len:0x1000
00:12:25.072 [2024-10-07 11:23:06.613579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:12:25.072 passed
00:12:25.072 Test: blockdev nvme passthru rw ...passed
00:12:25.072 Test: blockdev nvme passthru vendor specific ...passed
00:12:25.072 Test: blockdev nvme admin passthru ...passed
00:12:25.072 Test: blockdev copy ...passed
00:12:25.072 Suite: bdevio tests on: Nvme0n1
00:12:25.072 Test: blockdev write read block ...passed
00:12:25.072 Test: blockdev write zeroes read block ...passed
00:12:25.072 Test: blockdev write zeroes read no split ...passed
00:12:25.072 Test: blockdev write zeroes read split ...passed
00:12:25.072 Test: blockdev write zeroes read split partial ...passed
00:12:25.072 Test: blockdev reset ...[2024-10-07 11:23:06.684958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:12:25.072 passed
00:12:25.072 Test: blockdev write read 8 blocks ...[2024-10-07 11:23:06.689701] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:12:25.072 passed
00:12:25.072 Test: blockdev write read size > 128k ...passed
00:12:25.072 Test: blockdev write read invalid size ...passed
00:12:25.072 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:25.072 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:25.072 Test: blockdev write read max offset ...passed
00:12:25.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:25.072 Test: blockdev writev readv 8 blocks ...passed
00:12:25.072 Test: blockdev writev readv 30 x 1block ...passed
00:12:25.072 Test: blockdev writev readv block ...passed
00:12:25.072 Test: blockdev writev readv size > 128k ...passed
00:12:25.072 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:25.073 Test: blockdev comparev and writev ...passed
00:12:25.073 Test: blockdev nvme passthru rw ...[2024-10-07 11:23:06.697399] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:12:25.073 separate metadata which is not supported yet.
00:12:25.073 passed
00:12:25.073 Test: blockdev nvme passthru vendor specific ...[2024-10-07 11:23:06.698098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:12:25.073 [2024-10-07 11:23:06.698181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:12:25.073 passed
00:12:25.073 Test: blockdev nvme admin passthru ...passed
00:12:25.073 Test: blockdev copy ...passed
00:12:25.073
00:12:25.073 Run Summary: Type Total Ran Passed Failed Inactive
00:12:25.073 suites 7 7 n/a 0 0
00:12:25.073 tests 161 161 161 0 0
00:12:25.073 asserts 1025 1025 1025 0 n/a
00:12:25.073
00:12:25.073 Elapsed time = 1.767 seconds
00:12:25.073 0
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63278
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 63278 ']'
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 63278
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63278
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:25.073 killing process with pid 63278
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63278'
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 63278
00:12:25.073 11:23:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 63278
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:12:26.448
00:12:26.448 real 0m3.162s
00:12:26.448 user 0m7.745s
00:12:26.448 sys 0m0.452s
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:12:26.448 ************************************
00:12:26.448 END TEST bdev_bounds
00:12:26.448 ************************************
00:12:26.448 11:23:07 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:12:26.448 11:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:26.448 11:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:26.448 11:23:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:12:26.448 ************************************
00:12:26.448 START TEST bdev_nbd
00:12:26.448 ************************************
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63343
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63343 /var/tmp/spdk-nbd.sock
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 63343 ']'
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:26.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:26.448 11:23:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:12:26.448 [2024-10-07 11:23:08.061072] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
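Note: the killprocess/waitforlisten helpers traced above live in SPDK's test/common/autotest_common.sh. A minimal standalone sketch of the same pattern follows; it is an illustrative re-creation, not the exact source — the 0.1 s poll interval is an assumption, and the real waitforlisten also probes the RPC socket rather than only checking that the socket file exists.

# Sketch of the teardown/startup helpers seen in the trace above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # no pid given
    kill -0 "$pid" 2>/dev/null || return 0     # already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap and propagate exit status
}

waitforlisten() {
    local pid=$1 rpc_addr=$2 max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && return 0         # socket file present; assumed listening
        kill -0 "$pid" 2>/dev/null || return 1 # process died while we waited
        sleep 0.1
    done
    return 1
}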
00:12:26.448 [2024-10-07 11:23:08.061198] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:26.706 [2024-10-07 11:23:08.233980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:26.964 [2024-10-07 11:23:08.462540] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:27.529 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:27.787 1+0 records in
00:12:27.787 1+0 records out
00:12:27.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679628 s, 6.0 MB/s
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:27.787 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:12:28.046 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:12:28.046 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:12:28.046 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:28.047 1+0 records in
00:12:28.047 1+0 records out
00:12:28.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055659 s, 7.4 MB/s
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
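Note: the same waitfornbd check repeats below for each of the seven exported devices. A condensed sketch of that helper pattern: wait for the kernel to register the device in /proc/partitions, then read a single 4 KiB block back through it with O_DIRECT. Names mirror the trace; the sleep interval and the /tmp scratch path are illustrative assumptions, not the exact autotest_common.sh source.

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # read one block with O_DIRECT to prove I/O works end to end
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]    # a non-empty copy means the device answered reads
}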
00:12:28.047 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:28.304 1+0 records in
00:12:28.304 1+0 records out
00:12:28.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657358 s, 6.2 MB/s
00:12:28.304 11:23:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.304 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:28.304 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.304 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:28.304 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:28.304 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:28.304 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:28.304 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:28.562 1+0 records in
00:12:28.562 1+0 records out
00:12:28.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513581 s, 8.0 MB/s
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.562 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:28.821 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:29.079 1+0 records in
00:12:29.079 1+0 records out
00:12:29.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000960265 s, 4.3 MB/s
00:12:29.079 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.079 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:29.079 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.079 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:29.079 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:29.079 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:29.079 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:29.079 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:29.338 1+0 records in
00:12:29.338 1+0 records out
00:12:29.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000890605 s, 4.6 MB/s
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:29.338 11:23:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:29.597 1+0 records in
00:12:29.597 1+0 records out
00:12:29.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780328 s, 5.2 MB/s
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:12:29.597 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd0",
00:12:29.855 "bdev_name": "Nvme0n1"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd1",
00:12:29.855 "bdev_name": "Nvme1n1p1"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd2",
00:12:29.855 "bdev_name": "Nvme1n1p2"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd3",
00:12:29.855 "bdev_name": "Nvme2n1"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd4",
00:12:29.855 "bdev_name": "Nvme2n2"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd5",
00:12:29.855 "bdev_name": "Nvme2n3"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd6",
00:12:29.855 "bdev_name": "Nvme3n1"
00:12:29.855 }
00:12:29.855 ]'
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd0",
00:12:29.855 "bdev_name": "Nvme0n1"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd1",
00:12:29.855 "bdev_name": "Nvme1n1p1"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd2",
00:12:29.855 "bdev_name": "Nvme1n1p2"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd3",
00:12:29.855 "bdev_name": "Nvme2n1"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd4",
00:12:29.855 "bdev_name": "Nvme2n2"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd5",
00:12:29.855 "bdev_name": "Nvme2n3"
00:12:29.855 },
00:12:29.855 {
00:12:29.855 "nbd_device": "/dev/nbd6",
00:12:29.855 "bdev_name": "Nvme3n1"
00:12:29.855 }
00:12:29.855 ]'
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6'
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
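Note: the nbd_get_disks RPC above returns the device-to-bdev mapping as JSON, and the harness extracts the device names with jq. A condensed sketch of that query, using the same socket and script paths as the trace (the `|| true` guard mirrors the trace, since `grep -c` exits non-zero when an empty list is expected):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
disks_json=$("$rpc" -s "$sock" nbd_get_disks)
# one /dev/nbdX per line, e.g. /dev/nbd0 ... /dev/nbd6
echo "$disks_json" | jq -r '.[] | .nbd_device'
# count of attached devices, tolerating an empty list
count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)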
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6')
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:29.855 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:30.114 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:30.372 11:23:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:12:30.631 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:30.889 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:31.148 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:31.407 11:23:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:31.665 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:12:31.924 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
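Note: the nbd_stop_disk/waitfornbd_exit sequence traced above is the teardown mirror of the start path: ask the target to detach the device, then poll /proc/partitions until the kernel drops it. A combined sketch under the same paths as the trace; the wrapper function name and the 0.1 s sleep are illustrative assumptions.

nbd_stop_and_wait() {
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local sock=/var/tmp/spdk-nbd.sock
    local dev=$1 nbd_name i
    nbd_name=$(basename "$dev")
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break  # gone: done
        sleep 0.1
    done
}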
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:31.925 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:12:32.183 /dev/nbd0
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:32.183 1+0 records in
00:12:32.183 1+0 records out
00:12:32.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455206 s, 9.0 MB/s
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:32.183 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1
00:12:32.442 /dev/nbd1
00:12:32.442 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:32.442 11:23:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:32.442 1+0 records in
00:12:32.442 1+0 records out
00:12:32.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482212 s, 8.5 MB/s
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:32.442 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10
00:12:32.701 /dev/nbd10
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:32.701 1+0 records in
00:12:32.701 1+0 records out
00:12:32.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651291 s, 6.3 MB/s
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:32.701 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
00:12:32.960 /dev/nbd11
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:32.960 1+0 records in
00:12:32.960 1+0 records out
00:12:32.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073967 s, 5.5 MB/s
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:32.960 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
00:12:33.218 /dev/nbd12
00:12:33.218 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions
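Note: taken together, the trace above walks the whole nbd_function_test flow: start the bdev_svc app on an RPC socket, export each bdev at a chosen /dev/nbdX path, verify I/O, detach, and kill the app. A condensed end-to-end sketch using the same paths as the log and the helper sketches given earlier; error handling and the loop over all seven bdevs are trimmed, so treat it as an outline rather than the exact test script.

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-nbd.sock
"$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 --json "$spdk/test/bdev/bdev.json" &
nbd_pid=$!
waitforlisten "$nbd_pid" "$sock"                 # helper sketched earlier
"$spdk/scripts/rpc.py" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
waitfornbd nbd0                                  # helper sketched earlier
"$spdk/scripts/rpc.py" -s "$sock" nbd_stop_disk /dev/nbd0
killprocess "$nbd_pid"                           # helper sketched earlier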
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:33.219 1+0 records in
00:12:33.219 1+0 records out
00:12:33.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945893 s, 4.3 MB/s
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:33.219 11:23:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13
00:12:33.476 /dev/nbd13
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:33.476 1+0 records in
00:12:33.476 1+0 records out
00:12:33.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076126 s, 5.4 MB/s
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:33.476 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14
00:12:33.735 /dev/nbd14
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:33.735 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:33.736 1+0 records in
00:12:33.736 1+0 records out
00:12:33.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000925958 s, 4.4 MB/s
00:12:33.736 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:33.736 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:12:33.736 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:12:33.995 {
00:12:33.995 "nbd_device": "/dev/nbd0",
00:12:33.995 "bdev_name": "Nvme0n1"
00:12:33.995 },
00:12:33.995 {
00:12:33.995 "nbd_device": "/dev/nbd1",
00:12:33.995 "bdev_name": "Nvme1n1p1"
00:12:33.995 },
00:12:33.995 {
00:12:33.995 "nbd_device": "/dev/nbd10",
00:12:33.995 "bdev_name": "Nvme1n1p2"
00:12:33.995 },
00:12:33.995 {
00:12:33.995 "nbd_device": "/dev/nbd11",
00:12:33.995 "bdev_name": "Nvme2n1"
00:12:33.995 },
00:12:33.995 {
00:12:33.995 "nbd_device": "/dev/nbd12",
00:12:33.995 "bdev_name": "Nvme2n2"
00:12:33.995 },
00:12:33.995 {
00:12:33.995 "nbd_device": "/dev/nbd13",
00:12:33.995 "bdev_name": "Nvme2n3"
00:12:33.995 }, 00:12:33.995 { 00:12:33.995 "nbd_device": "/dev/nbd14", 00:12:33.995 "bdev_name": "Nvme3n1" 00:12:33.995 } 00:12:33.995 ]' 00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:33.995 { 00:12:33.995 "nbd_device": "/dev/nbd0", 00:12:33.995 "bdev_name": "Nvme0n1" 00:12:33.995 }, 00:12:33.995 { 00:12:33.995 "nbd_device": "/dev/nbd1", 00:12:33.995 "bdev_name": "Nvme1n1p1" 00:12:33.995 }, 00:12:33.995 { 00:12:33.995 "nbd_device": "/dev/nbd10", 00:12:33.995 "bdev_name": "Nvme1n1p2" 00:12:33.995 }, 00:12:33.995 { 00:12:33.995 "nbd_device": "/dev/nbd11", 00:12:33.995 "bdev_name": "Nvme2n1" 00:12:33.995 }, 00:12:33.995 { 00:12:33.995 "nbd_device": "/dev/nbd12", 00:12:33.995 "bdev_name": "Nvme2n2" 00:12:33.995 }, 00:12:33.995 { 00:12:33.995 "nbd_device": "/dev/nbd13", 00:12:33.995 "bdev_name": "Nvme2n3" 00:12:33.995 }, 00:12:33.995 { 00:12:33.995 "nbd_device": "/dev/nbd14", 00:12:33.995 "bdev_name": "Nvme3n1" 00:12:33.995 } 00:12:33.995 ]' 00:12:33.995 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:34.254 /dev/nbd1 00:12:34.254 /dev/nbd10 00:12:34.254 /dev/nbd11 00:12:34.254 /dev/nbd12 00:12:34.254 /dev/nbd13 00:12:34.254 /dev/nbd14' 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:34.254 /dev/nbd1 00:12:34.254 /dev/nbd10 00:12:34.254 /dev/nbd11 00:12:34.254 /dev/nbd12 00:12:34.254 /dev/nbd13 00:12:34.254 /dev/nbd14' 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:34.254 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:34.254 256+0 records in 00:12:34.254 256+0 records out 00:12:34.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122322 s, 85.7 MB/s 00:12:34.255 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.255 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:34.255 256+0 records in 00:12:34.255 256+0 records out 00:12:34.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.133608 s, 7.8 MB/s 00:12:34.255 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.255 11:23:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:34.513 256+0 records in 00:12:34.513 256+0 records out 00:12:34.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140225 s, 7.5 MB/s 00:12:34.513 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.513 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:34.513 256+0 records in 00:12:34.513 256+0 records out 00:12:34.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138853 s, 7.6 MB/s 00:12:34.513 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.513 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:34.772 256+0 records in 00:12:34.772 256+0 records out 00:12:34.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138828 s, 7.6 MB/s 00:12:34.772 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.772 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:35.032 256+0 records in 00:12:35.032 256+0 records out 00:12:35.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14015 s, 7.5 MB/s 00:12:35.032 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.032 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:35.032 256+0 records in 00:12:35.032 256+0 records out 00:12:35.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137853 s, 7.6 MB/s 00:12:35.032 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.032 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:35.291 256+0 records in 00:12:35.291 256+0 records out 00:12:35.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15405 s, 6.8 MB/s 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:35.291 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:35.292 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.292 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:35.292 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.292 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:35.292 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.292 11:23:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.550 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.809 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.067 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.326 11:23:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.584 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.843 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.103 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:37.362 11:23:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:37.621 malloc_lvol_verify 00:12:37.621 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:37.880 dec8d5c4-66a4-4c64-91fd-0e102a58f1ed 00:12:37.880 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:38.139 77899336-66b1-4f0b-b170-d24d6a70b5d7 00:12:38.139 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:38.139 /dev/nbd0 00:12:38.139 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:38.139 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:38.139 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:38.139 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:38.139 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:38.398 mke2fs 1.47.0 (5-Feb-2023) 00:12:38.398 Discarding device blocks: 0/4096 done 00:12:38.398 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:38.398 00:12:38.398 Allocating group tables: 0/1 done 00:12:38.398 Writing inode tables: 0/1 done 00:12:38.398 Creating journal (1024 blocks): done 00:12:38.398 Writing superblocks and filesystem accounting information: 0/1 done 00:12:38.398 00:12:38.398 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:38.398 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.398 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:38.398 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.398 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:38.398 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:38.398 11:23:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63343 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 63343 ']' 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 63343 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.398 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63343 00:12:38.658 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:38.658 killing process with pid 63343 00:12:38.658 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:38.658 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63343' 00:12:38.658 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 63343 00:12:38.658 11:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 63343 00:12:40.035 11:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:40.035 00:12:40.035 real 0m13.559s 00:12:40.035 user 0m17.550s 00:12:40.035 sys 0m5.746s 00:12:40.035 11:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.035 ************************************ 00:12:40.035 END TEST bdev_nbd 00:12:40.035 ************************************ 00:12:40.035 11:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:40.035 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:40.035 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:12:40.035 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:12:40.035 skipping fio tests on NVMe due to multi-ns failures. 00:12:40.035 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:12:40.035 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:40.035 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:40.035 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:40.035 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.035 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:40.035 ************************************ 00:12:40.035 START TEST bdev_verify 00:12:40.035 ************************************ 00:12:40.035 11:23:21 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:40.035 [2024-10-07 11:23:21.677194] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:12:40.035 [2024-10-07 11:23:21.677326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63787 ] 00:12:40.294 [2024-10-07 11:23:21.850895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:40.552 [2024-10-07 11:23:22.077036] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.552 [2024-10-07 11:23:22.077069] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.121 Running I/O for 5 seconds... 
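Here bdevperf consumes the generated bdev.json and runs a verify workload: 128 outstanding I/Os of 4096 bytes per job for 5 seconds, one job per core in the 0x3 mask, which is why every bdev appears twice (Core Mask 0x1 and 0x2) in the table that follows. In a verify workload bdevperf reads back what it wrote and compares the data, so the Fail/s column staying at 0.00 is the actual assertion. The MiB/s column is just IOPS times the 4 KiB I/O size; checking the first Nvme0n1 row:

    # 1486.45 IOPS * 4096 B per I/O, expressed in MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 1486.45 * 4096 / (1024 * 1024) }'   # -> 5.81 MiB/s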
00:12:43.555 19456.00 IOPS, 76.00 MiB/s [2024-10-07T11:23:26.201Z] 19776.00 IOPS, 77.25 MiB/s [2024-10-07T11:23:27.137Z] 20202.67 IOPS, 78.92 MiB/s [2024-10-07T11:23:28.073Z] 20784.00 IOPS, 81.19 MiB/s [2024-10-07T11:23:28.073Z] 20697.60 IOPS, 80.85 MiB/s 00:12:46.362 Latency(us) 00:12:46.362 [2024-10-07T11:23:28.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.362 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.362 Verification LBA range: start 0x0 length 0xbd0bd 00:12:46.362 Nvme0n1 : 5.08 1486.45 5.81 0.00 0.00 85929.88 17686.82 96014.19 00:12:46.362 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.362 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:46.362 Nvme0n1 : 5.04 1422.29 5.56 0.00 0.00 89692.63 20108.23 95171.96 00:12:46.363 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x0 length 0x4ff80 00:12:46.363 Nvme1n1p1 : 5.08 1485.61 5.80 0.00 0.00 85587.92 18739.61 75800.67 00:12:46.363 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x4ff80 length 0x4ff80 00:12:46.363 Nvme1n1p1 : 5.04 1421.87 5.55 0.00 0.00 89605.38 19266.00 89276.35 00:12:46.363 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x0 length 0x4ff7f 00:12:46.363 Nvme1n1p2 : 5.09 1485.05 5.80 0.00 0.00 85475.19 18107.94 69483.95 00:12:46.363 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:12:46.363 Nvme1n1p2 : 5.08 1435.12 5.61 0.00 0.00 88624.02 11791.22 77906.25 00:12:46.363 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x0 length 0x80000 00:12:46.363 Nvme2n1 : 5.09 1484.42 5.80 0.00 0.00 85360.80 18739.61 64851.69 00:12:46.363 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x80000 length 0x80000 00:12:46.363 Nvme2n1 : 5.09 1434.62 5.60 0.00 0.00 88479.25 11843.86 75800.67 00:12:46.363 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x0 length 0x80000 00:12:46.363 Nvme2n2 : 5.09 1483.74 5.80 0.00 0.00 85222.28 19687.12 63588.34 00:12:46.363 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x80000 length 0x80000 00:12:46.363 Nvme2n2 : 5.09 1434.01 5.60 0.00 0.00 88337.71 12686.09 74116.22 00:12:46.363 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x0 length 0x80000 00:12:46.363 Nvme2n3 : 5.09 1483.02 5.79 0.00 0.00 85103.11 20002.96 64851.69 00:12:46.363 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x80000 length 0x80000 00:12:46.363 Nvme2n3 : 5.09 1433.38 5.60 0.00 0.00 88201.21 13580.95 73273.99 00:12:46.363 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x0 length 0x20000 00:12:46.363 Nvme3n1 : 5.09 1482.32 5.79 0.00 0.00 84990.75 19476.56 68641.72 00:12:46.363 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.363 Verification LBA range: start 0x20000 length 0x20000 00:12:46.363 
Nvme3n1 : 5.09 1432.70 5.60 0.00 0.00 88060.26 13159.84 77064.02 00:12:46.363 [2024-10-07T11:23:28.074Z] =================================================================================================================== 00:12:46.363 [2024-10-07T11:23:28.074Z] Total : 20404.59 79.71 0.00 0.00 87012.66 11791.22 96014.19 00:12:48.292 00:12:48.292 real 0m8.225s 00:12:48.292 user 0m14.976s 00:12:48.292 sys 0m0.352s 00:12:48.292 11:23:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.292 ************************************ 00:12:48.292 END TEST bdev_verify 00:12:48.292 ************************************ 00:12:48.292 11:23:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:48.292 11:23:29 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:48.292 11:23:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:48.292 11:23:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.292 11:23:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:48.292 ************************************ 00:12:48.292 START TEST bdev_verify_big_io 00:12:48.292 ************************************ 00:12:48.292 11:23:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:48.292 [2024-10-07 11:23:29.983788] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:12:48.292 [2024-10-07 11:23:29.983917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63891 ] 00:12:48.565 [2024-10-07 11:23:30.165179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:48.835 [2024-10-07 11:23:30.388322] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.835 [2024-10-07 11:23:30.388357] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.771 Running I/O for 5 seconds... 
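A quick consistency check on the verify table above: with queue depth pinned at 128 per job, Little's law puts average latency near qd / IOPS, which is what the Average column (in microseconds) shows. For the core-0 Nvme0n1 job:

    # Expected average latency from queue depth and measured IOPS
    awk 'BEGIN { printf "%.0f us\n", 128 / 1486.45 * 1e6 }'   # -> 86111 us, vs 85929.88 reported

bdev_verify_big_io, starting above, reruns the same shape with -o 65536, so the per-I/O size grows 16x and the IOPS figures in its table drop by roughly that factor.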
00:12:54.974 2315.00 IOPS, 144.69 MiB/s [2024-10-07T11:23:37.253Z] 3015.50 IOPS, 188.47 MiB/s [2024-10-07T11:23:37.253Z] 3636.33 IOPS, 227.27 MiB/s 00:12:55.542 Latency(us) 00:12:55.542 [2024-10-07T11:23:37.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.542 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x0 length 0xbd0b 00:12:55.542 Nvme0n1 : 5.66 125.90 7.87 0.00 0.00 982243.02 35584.21 1435159.44 00:12:55.542 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:55.542 Nvme0n1 : 5.67 132.00 8.25 0.00 0.00 929693.88 22319.09 943297.29 00:12:55.542 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x0 length 0x4ff8 00:12:55.542 Nvme1n1p1 : 5.66 126.88 7.93 0.00 0.00 954818.12 51376.01 1448635.12 00:12:55.542 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x4ff8 length 0x4ff8 00:12:55.542 Nvme1n1p1 : 5.67 135.47 8.47 0.00 0.00 887778.94 60219.42 862443.23 00:12:55.542 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x0 length 0x4ff7 00:12:55.542 Nvme1n1p2 : 5.71 131.42 8.21 0.00 0.00 907175.04 45059.29 1475586.47 00:12:55.542 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x4ff7 length 0x4ff7 00:12:55.542 Nvme1n1p2 : 5.70 133.82 8.36 0.00 0.00 892491.96 108647.63 1361043.23 00:12:55.542 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x0 length 0x8000 00:12:55.542 Nvme2n1 : 5.71 130.67 8.17 0.00 0.00 887463.30 45690.96 1495799.98 00:12:55.542 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x8000 length 0x8000 00:12:55.542 Nvme2n1 : 5.70 136.59 8.54 0.00 0.00 850149.81 70747.30 916345.93 00:12:55.542 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x0 length 0x8000 00:12:55.542 Nvme2n2 : 5.76 137.22 8.58 0.00 0.00 829902.26 37900.34 1516013.49 00:12:55.542 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x8000 length 0x8000 00:12:55.542 Nvme2n2 : 5.77 151.57 9.47 0.00 0.00 757604.54 16844.59 845598.64 00:12:55.542 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x0 length 0x8000 00:12:55.542 Nvme2n3 : 5.77 141.50 8.84 0.00 0.00 787257.78 15370.69 1542964.84 00:12:55.542 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x8000 length 0x8000 00:12:55.542 Nvme2n3 : 5.77 151.46 9.47 0.00 0.00 740130.32 16107.64 859074.31 00:12:55.542 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x0 length 0x2000 00:12:55.542 Nvme3n1 : 5.83 177.34 11.08 0.00 0.00 617084.79 1677.88 855705.39 00:12:55.542 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.542 Verification LBA range: start 0x2000 length 0x2000 00:12:55.542 Nvme3n1 : 5.79 165.89 10.37 0.00 0.00 666440.37 7001.03 875918.91 00:12:55.542 
[2024-10-07T11:23:37.253Z] =================================================================================================================== 00:12:55.542 [2024-10-07T11:23:37.253Z] Total : 1977.73 123.61 0.00 0.00 823571.20 1677.88 1542964.84 00:12:57.443 00:12:57.443 real 0m9.271s 00:12:57.443 user 0m17.054s 00:12:57.443 sys 0m0.352s 00:12:57.443 11:23:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.443 ************************************ 00:12:57.443 END TEST bdev_verify_big_io 00:12:57.443 11:23:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.443 ************************************ 00:12:57.701 11:23:39 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.702 11:23:39 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:57.702 11:23:39 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.702 11:23:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:57.702 ************************************ 00:12:57.702 START TEST bdev_write_zeroes 00:12:57.702 ************************************ 00:12:57.702 11:23:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.702 [2024-10-07 11:23:39.325526] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:12:57.702 [2024-10-07 11:23:39.325671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64006 ] 00:12:57.960 [2024-10-07 11:23:39.501978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.219 [2024-10-07 11:23:39.731408] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.785 Running I/O for 1 seconds... 
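bdev_write_zeroes swaps in the -w write_zeroes workload on a single core: for one second bdevperf issues write-zeroes operations instead of data writes against every bdev. Whether a bdev reports support for that I/O type is visible in the supported_io_types map of its JSON description (as in the GPT bdev dumps further down). One way to list bdevs claiming support, assuming a running target on rpc.py's default socket:

    # List bdevs whose description advertises write_zeroes support
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'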
00:13:00.169 61824.00 IOPS, 241.50 MiB/s 00:13:00.169 Latency(us) 00:13:00.169 [2024-10-07T11:23:41.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.169 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:00.169 Nvme0n1 : 1.03 8795.94 34.36 0.00 0.00 14520.65 10791.07 34531.42 00:13:00.169 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:00.169 Nvme1n1p1 : 1.03 8786.65 34.32 0.00 0.00 14517.48 10948.99 35373.65 00:13:00.169 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:00.169 Nvme1n1p2 : 1.03 8776.79 34.28 0.00 0.00 14485.81 10527.87 38110.89 00:13:00.169 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:00.169 Nvme2n1 : 1.03 8768.19 34.25 0.00 0.00 14412.36 10738.43 33899.75 00:13:00.169 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:00.169 Nvme2n2 : 1.03 8760.02 34.22 0.00 0.00 14368.62 10633.15 32846.96 00:13:00.169 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:00.169 Nvme2n3 : 1.03 8751.89 34.19 0.00 0.00 14332.00 10738.43 32846.96 00:13:00.169 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:00.169 Nvme3n1 : 1.03 8743.04 34.15 0.00 0.00 14304.37 9475.08 32215.29 00:13:00.169 [2024-10-07T11:23:41.880Z] =================================================================================================================== 00:13:00.169 [2024-10-07T11:23:41.880Z] Total : 61382.53 239.78 0.00 0.00 14420.18 9475.08 38110.89 00:13:01.546 00:13:01.546 real 0m3.613s 00:13:01.546 user 0m3.198s 00:13:01.546 sys 0m0.298s 00:13:01.546 11:23:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.546 ************************************ 00:13:01.546 END TEST bdev_write_zeroes 00:13:01.546 11:23:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:01.546 ************************************ 00:13:01.546 11:23:42 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.546 11:23:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:01.546 11:23:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.546 11:23:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:01.546 ************************************ 00:13:01.546 START TEST bdev_json_nonenclosed 00:13:01.546 ************************************ 00:13:01.546 11:23:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.546 [2024-10-07 11:23:43.013562] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:13:01.546 [2024-10-07 11:23:43.013709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64064 ] 00:13:01.546 [2024-10-07 11:23:43.194309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.805 [2024-10-07 11:23:43.412115] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.805 [2024-10-07 11:23:43.412219] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:01.805 [2024-10-07 11:23:43.412242] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:01.805 [2024-10-07 11:23:43.412255] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:02.373 00:13:02.373 real 0m0.932s 00:13:02.373 user 0m0.659s 00:13:02.373 sys 0m0.167s 00:13:02.373 11:23:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.373 11:23:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:02.373 ************************************ 00:13:02.373 END TEST bdev_json_nonenclosed 00:13:02.373 ************************************ 00:13:02.373 11:23:43 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:02.373 11:23:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:02.373 11:23:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.373 11:23:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:02.373 ************************************ 00:13:02.373 START TEST bdev_json_nonarray 00:13:02.373 ************************************ 00:13:02.373 11:23:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:02.373 [2024-10-07 11:23:44.018491] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:13:02.373 [2024-10-07 11:23:44.018847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64095 ] 00:13:02.632 [2024-10-07 11:23:44.202119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.891 [2024-10-07 11:23:44.418219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.891 [2024-10-07 11:23:44.418324] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
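Both negative tests here hand bdevperf a deliberately malformed --json config and expect a clean error exit rather than a crash: nonenclosed.json carries content that is not wrapped in a top-level {}, and nonarray.json makes 'subsystems' something other than an array, matching the two json_config errors in the trace. For contrast, the minimal shape json_config_prepare_ctx accepts is a single object holding a 'subsystems' array (subsystem content elided):

    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }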
00:13:02.891 [2024-10-07 11:23:44.418347] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:02.891 [2024-10-07 11:23:44.418361] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:03.150 00:13:03.150 real 0m0.930s 00:13:03.150 user 0m0.656s 00:13:03.150 sys 0m0.168s 00:13:03.150 11:23:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.150 11:23:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:03.150 ************************************ 00:13:03.150 END TEST bdev_json_nonarray 00:13:03.150 ************************************ 00:13:03.409 11:23:44 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:13:03.409 11:23:44 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:13:03.409 11:23:44 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:13:03.409 11:23:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:03.409 11:23:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.409 11:23:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:03.409 ************************************ 00:13:03.409 START TEST bdev_gpt_uuid 00:13:03.409 ************************************ 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64126 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64126 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 64126 ']' 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.409 11:23:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:03.409 [2024-10-07 11:23:45.020563] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
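The bdev_gpt_uuid test starting here brings up a bare spdk_tgt, loads bdev.json, and then resolves each GPT partition bdev by its unique partition GUID. Condensed from the rpc_cmd and jq assertions below (GUID copied from the trace; rpc.py talks to spdk_tgt's default socket):

    # Fetch one bdev by its GPT unique partition GUID and assert the GUID
    # round-trips through both the alias list and the gpt driver data.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$($rpc bdev_get_bdevs -b "$uuid")
    [ "$(jq -r 'length' <<<"$bdev")" = 1 ]
    [ "$(jq -r '.[0].aliases[0]' <<<"$bdev")" = "$uuid" ]
    [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev")" = "$uuid" ]

The trace escapes every character of the expected GUID in its [[ == ]] comparisons so that bash matches it literally instead of treating it as a glob pattern.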
00:13:03.409 [2024-10-07 11:23:45.020693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64126 ] 00:13:03.668 [2024-10-07 11:23:45.194846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.926 [2024-10-07 11:23:45.409940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.862 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.862 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:13:04.862 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:04.862 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.862 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:05.121 Some configs were skipped because the RPC state that can call them passed over. 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:13:05.121 { 00:13:05.121 "name": "Nvme1n1p1", 00:13:05.121 "aliases": [ 00:13:05.121 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:13:05.121 ], 00:13:05.121 "product_name": "GPT Disk", 00:13:05.121 "block_size": 4096, 00:13:05.121 "num_blocks": 655104, 00:13:05.121 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:05.121 "assigned_rate_limits": { 00:13:05.121 "rw_ios_per_sec": 0, 00:13:05.121 "rw_mbytes_per_sec": 0, 00:13:05.121 "r_mbytes_per_sec": 0, 00:13:05.121 "w_mbytes_per_sec": 0 00:13:05.121 }, 00:13:05.121 "claimed": false, 00:13:05.121 "zoned": false, 00:13:05.121 "supported_io_types": { 00:13:05.121 "read": true, 00:13:05.121 "write": true, 00:13:05.121 "unmap": true, 00:13:05.121 "flush": true, 00:13:05.121 "reset": true, 00:13:05.121 "nvme_admin": false, 00:13:05.121 "nvme_io": false, 00:13:05.121 "nvme_io_md": false, 00:13:05.121 "write_zeroes": true, 00:13:05.121 "zcopy": false, 00:13:05.121 "get_zone_info": false, 00:13:05.121 "zone_management": false, 00:13:05.121 "zone_append": false, 00:13:05.121 "compare": true, 00:13:05.121 "compare_and_write": false, 00:13:05.121 "abort": true, 00:13:05.121 "seek_hole": false, 00:13:05.121 "seek_data": false, 00:13:05.121 "copy": true, 00:13:05.121 "nvme_iov_md": false 00:13:05.121 }, 00:13:05.121 "driver_specific": { 
00:13:05.121 "gpt": { 00:13:05.121 "base_bdev": "Nvme1n1", 00:13:05.121 "offset_blocks": 256, 00:13:05.121 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:13:05.121 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:05.121 "partition_name": "SPDK_TEST_first" 00:13:05.121 } 00:13:05.121 } 00:13:05.121 } 00:13:05.121 ]' 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:13:05.121 { 00:13:05.121 "name": "Nvme1n1p2", 00:13:05.121 "aliases": [ 00:13:05.121 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:13:05.121 ], 00:13:05.121 "product_name": "GPT Disk", 00:13:05.121 "block_size": 4096, 00:13:05.121 "num_blocks": 655103, 00:13:05.121 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:05.121 "assigned_rate_limits": { 00:13:05.121 "rw_ios_per_sec": 0, 00:13:05.121 "rw_mbytes_per_sec": 0, 00:13:05.121 "r_mbytes_per_sec": 0, 00:13:05.121 "w_mbytes_per_sec": 0 00:13:05.121 }, 00:13:05.121 "claimed": false, 00:13:05.121 "zoned": false, 00:13:05.121 "supported_io_types": { 00:13:05.121 "read": true, 00:13:05.121 "write": true, 00:13:05.121 "unmap": true, 00:13:05.121 "flush": true, 00:13:05.121 "reset": true, 00:13:05.121 "nvme_admin": false, 00:13:05.121 "nvme_io": false, 00:13:05.121 "nvme_io_md": false, 00:13:05.121 "write_zeroes": true, 00:13:05.121 "zcopy": false, 00:13:05.121 "get_zone_info": false, 00:13:05.121 "zone_management": false, 00:13:05.121 "zone_append": false, 00:13:05.121 "compare": true, 00:13:05.121 "compare_and_write": false, 00:13:05.121 "abort": true, 00:13:05.121 "seek_hole": false, 00:13:05.121 "seek_data": false, 00:13:05.121 "copy": true, 00:13:05.121 "nvme_iov_md": false 00:13:05.121 }, 00:13:05.121 "driver_specific": { 00:13:05.121 "gpt": { 00:13:05.121 "base_bdev": "Nvme1n1", 00:13:05.121 "offset_blocks": 655360, 00:13:05.121 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:13:05.121 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:05.121 "partition_name": "SPDK_TEST_second" 00:13:05.121 } 00:13:05.121 } 00:13:05.121 } 00:13:05.121 ]' 00:13:05.121 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 64126 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 64126 ']' 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 64126 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64126 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.380 killing process with pid 64126 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64126' 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 64126 00:13:05.380 11:23:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 64126 00:13:07.915 00:13:07.915 real 0m4.627s 00:13:07.915 user 0m4.739s 00:13:07.915 sys 0m0.570s 00:13:07.915 11:23:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.915 11:23:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:07.915 ************************************ 00:13:07.915 END TEST bdev_gpt_uuid 00:13:07.915 ************************************ 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:13:07.915 11:23:49 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:08.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:08.747 Waiting for block devices as requested 00:13:09.009 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.009 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:13:09.267 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.267 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.575 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:14.575 11:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:13:14.575 11:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:13:14.575 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:14.575 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:14.575 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:14.575 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:14.575 11:23:56 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:13:14.575 00:13:14.575 real 1m9.222s 00:13:14.575 user 1m25.371s 00:13:14.575 sys 0m13.023s 00:13:14.575 11:23:56 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.575 11:23:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:14.575 ************************************ 00:13:14.575 END TEST blockdev_nvme_gpt 00:13:14.575 ************************************ 00:13:14.833 11:23:56 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:14.833 11:23:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:14.833 11:23:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.833 11:23:56 -- common/autotest_common.sh@10 -- # set +x 00:13:14.833 ************************************ 00:13:14.833 START TEST nvme 00:13:14.833 ************************************ 00:13:14.833 11:23:56 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:14.833 * Looking for test storage... 00:13:14.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:14.834 11:23:56 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.834 11:23:56 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.834 11:23:56 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.834 11:23:56 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.834 11:23:56 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.834 11:23:56 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.834 11:23:56 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.834 11:23:56 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.834 11:23:56 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.834 11:23:56 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.834 11:23:56 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.834 11:23:56 nvme -- scripts/common.sh@344 -- # case "$op" in 00:13:14.834 11:23:56 nvme -- scripts/common.sh@345 -- # : 1 00:13:14.834 11:23:56 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.834 11:23:56 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.834 11:23:56 nvme -- scripts/common.sh@365 -- # decimal 1 00:13:14.834 11:23:56 nvme -- scripts/common.sh@353 -- # local d=1 00:13:14.834 11:23:56 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.834 11:23:56 nvme -- scripts/common.sh@355 -- # echo 1 00:13:14.834 11:23:56 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.834 11:23:56 nvme -- scripts/common.sh@366 -- # decimal 2 00:13:14.834 11:23:56 nvme -- scripts/common.sh@353 -- # local d=2 00:13:14.834 11:23:56 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.834 11:23:56 nvme -- scripts/common.sh@355 -- # echo 2 00:13:14.834 11:23:56 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.834 11:23:56 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.834 11:23:56 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.834 11:23:56 nvme -- scripts/common.sh@368 -- # return 0 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:14.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.834 --rc genhtml_branch_coverage=1 00:13:14.834 --rc genhtml_function_coverage=1 00:13:14.834 --rc genhtml_legend=1 00:13:14.834 --rc geninfo_all_blocks=1 00:13:14.834 --rc geninfo_unexecuted_blocks=1 00:13:14.834 00:13:14.834 ' 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:14.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.834 --rc genhtml_branch_coverage=1 00:13:14.834 --rc genhtml_function_coverage=1 00:13:14.834 --rc genhtml_legend=1 00:13:14.834 --rc geninfo_all_blocks=1 00:13:14.834 --rc geninfo_unexecuted_blocks=1 00:13:14.834 00:13:14.834 ' 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:14.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.834 --rc genhtml_branch_coverage=1 00:13:14.834 --rc genhtml_function_coverage=1 00:13:14.834 --rc genhtml_legend=1 00:13:14.834 --rc geninfo_all_blocks=1 00:13:14.834 --rc geninfo_unexecuted_blocks=1 00:13:14.834 00:13:14.834 ' 00:13:14.834 11:23:56 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:14.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.834 --rc genhtml_branch_coverage=1 00:13:14.834 --rc genhtml_function_coverage=1 00:13:14.834 --rc genhtml_legend=1 00:13:14.834 --rc geninfo_all_blocks=1 00:13:14.834 --rc geninfo_unexecuted_blocks=1 00:13:14.834 00:13:14.834 ' 00:13:14.834 11:23:56 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:15.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:16.336 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:16.336 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:16.336 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:16.595 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:16.595 11:23:58 nvme -- nvme/nvme.sh@79 -- # uname 00:13:16.595 11:23:58 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:13:16.595 11:23:58 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:13:16.595 11:23:58 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:13:16.595 11:23:58 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:13:16.595 11:23:58 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:13:16.595 11:23:58 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:13:16.595 Waiting for stub to ready for secondary processes... 00:13:16.595 11:23:58 nvme -- common/autotest_common.sh@1071 -- # stubpid=64788 00:13:16.595 11:23:58 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:13:16.595 11:23:58 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:13:16.595 11:23:58 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:16.595 11:23:58 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64788 ]] 00:13:16.595 11:23:58 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:13:16.595 [2024-10-07 11:23:58.257041] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:13:16.595 [2024-10-07 11:23:58.257180] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:13:17.533 11:23:59 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:17.533 11:23:59 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64788 ]] 00:13:17.533 11:23:59 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:13:17.793 [2024-10-07 11:23:59.270078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.793 [2024-10-07 11:23:59.477105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.793 [2024-10-07 11:23:59.477245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.793 [2024-10-07 11:23:59.477287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.793 [2024-10-07 11:23:59.494291] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:13:17.793 [2024-10-07 11:23:59.494483] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:18.053 [2024-10-07 11:23:59.513021] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:18.053 [2024-10-07 11:23:59.513471] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:18.053 [2024-10-07 11:23:59.519873] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:18.053 [2024-10-07 11:23:59.520451] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:13:18.053 [2024-10-07 11:23:59.521068] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:13:18.053 [2024-10-07 11:23:59.525431] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:18.053 [2024-10-07 11:23:59.525866] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:13:18.053 [2024-10-07 11:23:59.526165] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:13:18.053 [2024-10-07 11:23:59.530080] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:18.053 [2024-10-07 11:23:59.530379] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:13:18.053 [2024-10-07 11:23:59.530464] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:13:18.054 [2024-10-07 11:23:59.530526] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:13:18.054 [2024-10-07 11:23:59.530582] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:13:18.626 done. 00:13:18.626 11:24:00 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:18.626 11:24:00 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:13:18.626 11:24:00 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:18.626 11:24:00 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:13:18.626 11:24:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.626 11:24:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.626 ************************************ 00:13:18.626 START TEST nvme_reset 00:13:18.626 ************************************ 00:13:18.626 11:24:00 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:18.884 Initializing NVMe Controllers 00:13:18.884 Skipping QEMU NVMe SSD at 0000:00:10.0 00:13:18.884 Skipping QEMU NVMe SSD at 0000:00:11.0 00:13:18.884 Skipping QEMU NVMe SSD at 0000:00:13.0 00:13:18.884 Skipping QEMU NVMe SSD at 0000:00:12.0 00:13:18.884 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:13:18.884 00:13:18.884 real 0m0.292s 00:13:18.884 user 0m0.097s 00:13:18.884 sys 0m0.151s 00:13:18.884 11:24:00 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.884 ************************************ 00:13:18.884 END TEST nvme_reset 00:13:18.884 ************************************ 00:13:18.884 11:24:00 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:13:18.884 11:24:00 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:13:18.884 11:24:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:18.884 11:24:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.884 11:24:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.142 ************************************ 00:13:19.142 START TEST nvme_identify 00:13:19.142 ************************************ 00:13:19.142 11:24:00 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:13:19.142 11:24:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:13:19.142 11:24:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:13:19.142 11:24:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:13:19.142 11:24:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:13:19.142 11:24:00 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:19.142 11:24:00 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:13:19.142 11:24:00 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:19.142 11:24:00 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:19.142 11:24:00 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:19.142 11:24:00 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:13:19.142 11:24:00 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:19.142 11:24:00 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:13:19.403 ===================================================== 00:13:19.403 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:19.404 ===================================================== 00:13:19.404 Controller Capabilities/Features 00:13:19.404 ================================ 00:13:19.404 Vendor ID: 1b36 00:13:19.404 Subsystem Vendor ID: 1af4 00:13:19.404 Serial Number: 12340 00:13:19.404 Model Number: QEMU NVMe Ctrl 00:13:19.404 Firmware Version: 8.0.0 00:13:19.404 Recommended Arb Burst: 6 00:13:19.404 IEEE OUI Identifier: 00 54 52 00:13:19.404 Multi-path I/O 00:13:19.404 May have multiple subsystem ports: No 00:13:19.404 May have multiple controllers: No 00:13:19.404 Associated with SR-IOV VF: No 00:13:19.404 Max Data Transfer Size: 524288 00:13:19.404 Max Number of Namespaces: 256 00:13:19.404 Max Number of I/O Queues: 64 00:13:19.404 NVMe Specification Version (VS): 1.4 00:13:19.404 NVMe Specification Version (Identify): 1.4 00:13:19.404 Maximum Queue Entries: 2048 00:13:19.404 Contiguous Queues Required: Yes 00:13:19.404 Arbitration Mechanisms Supported 00:13:19.404 Weighted Round Robin: Not Supported 00:13:19.404 Vendor Specific: Not Supported 00:13:19.404 Reset Timeout: 7500 ms 00:13:19.404 Doorbell Stride: 4 bytes 00:13:19.404 NVM Subsystem Reset: Not Supported 00:13:19.404 Command Sets Supported 00:13:19.404 NVM Command Set: Supported 00:13:19.404 Boot Partition: Not Supported 00:13:19.404 Memory Page Size Minimum: 4096 bytes 00:13:19.404 Memory Page Size Maximum: 65536 bytes 00:13:19.404 Persistent Memory Region: Not Supported 00:13:19.404 Optional Asynchronous Events Supported 00:13:19.404 Namespace Attribute Notices: Supported 00:13:19.404 Firmware Activation Notices: Not Supported 00:13:19.404 ANA Change Notices: Not Supported 00:13:19.404 PLE Aggregate Log Change Notices: Not Supported 00:13:19.404 LBA Status Info Alert Notices: Not Supported 00:13:19.404 EGE Aggregate Log Change Notices: Not Supported 00:13:19.404 Normal NVM Subsystem Shutdown event: Not Supported 00:13:19.404 Zone Descriptor Change Notices: Not Supported 00:13:19.404 Discovery Log Change Notices: Not Supported 00:13:19.404 Controller Attributes 00:13:19.404 128-bit Host Identifier: Not Supported 00:13:19.404 Non-Operational Permissive Mode: Not Supported 00:13:19.404 NVM Sets: Not Supported 00:13:19.404 Read Recovery Levels: Not Supported 00:13:19.404 Endurance Groups: Not Supported 00:13:19.404 Predictable Latency Mode: Not Supported 00:13:19.404 Traffic Based Keep ALive: Not Supported 00:13:19.404 Namespace Granularity: Not Supported 00:13:19.404 SQ Associations: Not Supported 00:13:19.404 UUID List: Not Supported 00:13:19.404 Multi-Domain Subsystem: Not Supported 00:13:19.404 Fixed Capacity Management: Not Supported 00:13:19.404 Variable Capacity Management: Not Supported 00:13:19.404 Delete Endurance Group: Not Supported 00:13:19.404 Delete NVM Set: Not Supported 00:13:19.404 Extended LBA Formats Supported: Supported 00:13:19.404 Flexible Data Placement Supported: Not Supported 00:13:19.404 00:13:19.404 Controller Memory Buffer Support 00:13:19.404 ================================ 00:13:19.404 Supported: No 00:13:19.404 00:13:19.404 Persistent Memory Region Support 00:13:19.404 ================================ 00:13:19.404 Supported: No 00:13:19.404 00:13:19.404 Admin 
Command Set Attributes 00:13:19.404 ============================ 00:13:19.404 Security Send/Receive: Not Supported 00:13:19.404 Format NVM: Supported 00:13:19.404 Firmware Activate/Download: Not Supported 00:13:19.404 Namespace Management: Supported 00:13:19.404 Device Self-Test: Not Supported 00:13:19.404 Directives: Supported 00:13:19.404 NVMe-MI: Not Supported 00:13:19.404 Virtualization Management: Not Supported 00:13:19.404 Doorbell Buffer Config: Supported 00:13:19.404 Get LBA Status Capability: Not Supported 00:13:19.404 Command & Feature Lockdown Capability: Not Supported 00:13:19.404 Abort Command Limit: 4 00:13:19.404 Async Event Request Limit: 4 00:13:19.404 Number of Firmware Slots: N/A 00:13:19.404 Firmware Slot 1 Read-Only: N/A 00:13:19.404 Firmware Activation Without Reset: N/A 00:13:19.404 Multiple Update Detection Support: N/A 00:13:19.404 Firmware Update Granularity: No Information Provided 00:13:19.404 Per-Namespace SMART Log: Yes 00:13:19.404 Asymmetric Namespace Access Log Page: Not Supported 00:13:19.404 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:19.404 Command Effects Log Page: Supported 00:13:19.404 Get Log Page Extended Data: Supported 00:13:19.404 Telemetry Log Pages: Not Supported 00:13:19.404 Persistent Event Log Pages: Not Supported 00:13:19.404 Supported Log Pages Log Page: May Support 00:13:19.404 Commands Supported & Effects Log Page: Not Supported 00:13:19.404 Feature Identifiers & Effects Log Page:May Support 00:13:19.404 NVMe-MI Commands & Effects Log Page: May Support 00:13:19.404 Data Area 4 for Telemetry Log: Not Supported 00:13:19.404 Error Log Page Entries Supported: 1 00:13:19.404 Keep Alive: Not Supported 00:13:19.404 00:13:19.404 NVM Command Set Attributes 00:13:19.404 ========================== 00:13:19.404 Submission Queue Entry Size 00:13:19.404 Max: 64 00:13:19.404 Min: 64 00:13:19.404 Completion Queue Entry Size 00:13:19.404 Max: 16 00:13:19.404 Min: 16 00:13:19.404 Number of Namespaces: 256 00:13:19.404 Compare Command: Supported 00:13:19.404 Write Uncorrectable Command: Not Supported 00:13:19.404 Dataset Management Command: Supported 00:13:19.404 Write Zeroes Command: Supported 00:13:19.404 Set Features Save Field: Supported 00:13:19.404 Reservations: Not Supported 00:13:19.404 Timestamp: Supported 00:13:19.404 Copy: Supported 00:13:19.404 Volatile Write Cache: Present 00:13:19.404 Atomic Write Unit (Normal): 1 00:13:19.404 Atomic Write Unit (PFail): 1 00:13:19.404 Atomic Compare & Write Unit: 1 00:13:19.404 Fused Compare & Write: Not Supported 00:13:19.404 Scatter-Gather List 00:13:19.404 SGL Command Set: Supported 00:13:19.404 SGL Keyed: Not Supported 00:13:19.404 SGL Bit Bucket Descriptor: Not Supported 00:13:19.404 SGL Metadata Pointer: Not Supported 00:13:19.404 Oversized SGL: Not Supported 00:13:19.404 SGL Metadata Address: Not Supported 00:13:19.404 SGL Offset: Not Supported 00:13:19.404 Transport SGL Data Block: Not Supported 00:13:19.404 Replay Protected Memory Block: Not Supported 00:13:19.404 00:13:19.404 Firmware Slot Information 00:13:19.404 ========================= 00:13:19.404 Active slot: 1 00:13:19.404 Slot 1 Firmware Revision: 1.0 00:13:19.404 00:13:19.404 00:13:19.404 Commands Supported and Effects 00:13:19.404 ============================== 00:13:19.404 Admin Commands 00:13:19.404 -------------- 00:13:19.404 Delete I/O Submission Queue (00h): Supported 00:13:19.404 Create I/O Submission Queue (01h): Supported 00:13:19.404 Get Log Page (02h): Supported 00:13:19.404 Delete I/O Completion Queue (04h): Supported 
00:13:19.404 Create I/O Completion Queue (05h): Supported 00:13:19.404 Identify (06h): Supported 00:13:19.404 Abort (08h): Supported 00:13:19.404 Set Features (09h): Supported 00:13:19.404 Get Features (0Ah): Supported 00:13:19.404 Asynchronous Event Request (0Ch): Supported 00:13:19.404 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:19.404 Directive Send (19h): Supported 00:13:19.404 Directive Receive (1Ah): Supported 00:13:19.404 Virtualization Management (1Ch): Supported 00:13:19.404 Doorbell Buffer Config (7Ch): Supported 00:13:19.404 Format NVM (80h): Supported LBA-Change 00:13:19.404 I/O Commands 00:13:19.404 ------------ 00:13:19.404 Flush (00h): Supported LBA-Change 00:13:19.404 Write (01h): Supported LBA-Change 00:13:19.404 Read (02h): Supported 00:13:19.404 Compare (05h): Supported 00:13:19.404 Write Zeroes (08h): Supported LBA-Change 00:13:19.404 Dataset Management (09h): Supported LBA-Change 00:13:19.404 Unknown (0Ch): Supported 00:13:19.404 Unknown (12h): Supported 00:13:19.404 Copy (19h): Supported LBA-Change 00:13:19.404 Unknown (1Dh): Supported LBA-Change 00:13:19.404 00:13:19.404 Error Log 00:13:19.404 ========= 00:13:19.404 00:13:19.404 Arbitration 00:13:19.404 =========== 00:13:19.404 Arbitration Burst: no limit 00:13:19.404 00:13:19.404 Power Management 00:13:19.404 ================ 00:13:19.404 Number of Power States: 1 00:13:19.404 Current Power State: Power State #0 00:13:19.404 Power State #0: 00:13:19.404 Max Power: 25.00 W 00:13:19.404 Non-Operational State: Operational 00:13:19.404 Entry Latency: 16 microseconds 00:13:19.404 Exit Latency: 4 microseconds 00:13:19.404 Relative Read Throughput: 0 00:13:19.404 Relative Read Latency: 0 00:13:19.404 Relative Write Throughput: 0 00:13:19.404 Relative Write Latency: 0 00:13:19.404 [2024-10-07 11:24:00.977008] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 64823 terminated unexpected 00:13:19.404 [2024-10-07 11:24:00.978067] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 64823 terminated unexpected 00:13:19.404 Idle Power: Not Reported 00:13:19.404 Active Power: Not Reported 00:13:19.404 Non-Operational Permissive Mode: Not Supported 00:13:19.404 00:13:19.405 Health Information 00:13:19.405 ================== 00:13:19.405 Critical Warnings: 00:13:19.405 Available Spare Space: OK 00:13:19.405 Temperature: OK 00:13:19.405 Device Reliability: OK 00:13:19.405 Read Only: No 00:13:19.405 Volatile Memory Backup: OK 00:13:19.405 Current Temperature: 323 Kelvin (50 Celsius) 00:13:19.405 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:19.405 Available Spare: 0% 00:13:19.405 Available Spare Threshold: 0% 00:13:19.405 Life Percentage Used: 0% 00:13:19.405 Data Units Read: 746 00:13:19.405 Data Units Written: 674 00:13:19.405 Host Read Commands: 36928 00:13:19.405 Host Write Commands: 36714 00:13:19.405 Controller Busy Time: 0 minutes 00:13:19.405 Power Cycles: 0 00:13:19.405 Power On Hours: 0 hours 00:13:19.405 Unsafe Shutdowns: 0 00:13:19.405 Unrecoverable Media Errors: 0 00:13:19.405 Lifetime Error Log Entries: 0 00:13:19.405 Warning Temperature Time: 0 minutes 00:13:19.405 Critical Temperature Time: 0 minutes 00:13:19.405 00:13:19.405 Number of Queues 00:13:19.405 ================ 00:13:19.405 Number of I/O Submission Queues: 64 00:13:19.405 Number of I/O Completion Queues: 64 00:13:19.405 00:13:19.405 ZNS Specific Controller Data 00:13:19.405 ============================ 00:13:19.405 Zone Append Size Limit: 0 00:13:19.405 00:13:19.405 
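
The identify report above (and the three that follow) comes from a single spdk_nvme_identify run over all four controllers; the bracketed nvme_ctrlr_remove_inactive_proc *ERROR* lines threaded through the reports are asynchronous cleanup notices left over from the stub shutdown, not test failures. To reproduce the report for one controller by hand, the tool can be pointed at a single PCIe address; a minimal sketch, assuming the SPDK build tree used by this job and the first BDF shown above:

    # Identify a single controller instead of scanning all of them;
    # -i 0 matches the shared-memory id used elsewhere in this job.
    build/bin/spdk_nvme_identify -i 0 -r 'trtype:PCIe traddr:0000:00:10.0'
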
00:13:19.405 Active Namespaces 00:13:19.405 ================= 00:13:19.405 Namespace ID:1 00:13:19.405 Error Recovery Timeout: Unlimited 00:13:19.405 Command Set Identifier: NVM (00h) 00:13:19.405 Deallocate: Supported 00:13:19.405 Deallocated/Unwritten Error: Supported 00:13:19.405 Deallocated Read Value: All 0x00 00:13:19.405 Deallocate in Write Zeroes: Not Supported 00:13:19.405 Deallocated Guard Field: 0xFFFF 00:13:19.405 Flush: Supported 00:13:19.405 Reservation: Not Supported 00:13:19.405 Metadata Transferred as: Separate Metadata Buffer 00:13:19.405 Namespace Sharing Capabilities: Private 00:13:19.405 Size (in LBAs): 1548666 (5GiB) 00:13:19.405 Capacity (in LBAs): 1548666 (5GiB) 00:13:19.405 Utilization (in LBAs): 1548666 (5GiB) 00:13:19.405 Thin Provisioning: Not Supported 00:13:19.405 Per-NS Atomic Units: No 00:13:19.405 Maximum Single Source Range Length: 128 00:13:19.405 Maximum Copy Length: 128 00:13:19.405 Maximum Source Range Count: 128 00:13:19.405 NGUID/EUI64 Never Reused: No 00:13:19.405 Namespace Write Protected: No 00:13:19.405 Number of LBA Formats: 8 00:13:19.405 Current LBA Format: LBA Format #07 00:13:19.405 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:19.405 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:19.405 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:19.405 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:19.405 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:19.405 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:19.405 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:19.405 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:19.405 00:13:19.405 NVM Specific Namespace Data 00:13:19.405 =========================== 00:13:19.405 Logical Block Storage Tag Mask: 0 00:13:19.405 Protection Information Capabilities: 00:13:19.405 16b Guard Protection Information Storage Tag Support: No 00:13:19.405 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:19.405 Storage Tag Check Read Support: No 00:13:19.405 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.405 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.405 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.405 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.405 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.405 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.405 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.405 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.405 ===================================================== 00:13:19.405 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:19.405 ===================================================== 00:13:19.405 Controller Capabilities/Features 00:13:19.405 ================================ 00:13:19.405 Vendor ID: 1b36 00:13:19.405 Subsystem Vendor ID: 1af4 00:13:19.405 Serial Number: 12341 00:13:19.405 Model Number: QEMU NVMe Ctrl 00:13:19.405 Firmware Version: 8.0.0 00:13:19.405 Recommended Arb Burst: 6 00:13:19.405 IEEE OUI Identifier: 00 54 52 00:13:19.405 Multi-path I/O 00:13:19.405 May have multiple subsystem ports: No 00:13:19.405 May have multiple controllers: No 00:13:19.405 
Associated with SR-IOV VF: No 00:13:19.405 Max Data Transfer Size: 524288 00:13:19.405 Max Number of Namespaces: 256 00:13:19.405 Max Number of I/O Queues: 64 00:13:19.405 NVMe Specification Version (VS): 1.4 00:13:19.405 NVMe Specification Version (Identify): 1.4 00:13:19.405 Maximum Queue Entries: 2048 00:13:19.405 Contiguous Queues Required: Yes 00:13:19.405 Arbitration Mechanisms Supported 00:13:19.405 Weighted Round Robin: Not Supported 00:13:19.405 Vendor Specific: Not Supported 00:13:19.405 Reset Timeout: 7500 ms 00:13:19.405 Doorbell Stride: 4 bytes 00:13:19.405 NVM Subsystem Reset: Not Supported 00:13:19.405 Command Sets Supported 00:13:19.405 NVM Command Set: Supported 00:13:19.405 Boot Partition: Not Supported 00:13:19.405 Memory Page Size Minimum: 4096 bytes 00:13:19.405 Memory Page Size Maximum: 65536 bytes 00:13:19.405 Persistent Memory Region: Not Supported 00:13:19.405 Optional Asynchronous Events Supported 00:13:19.405 Namespace Attribute Notices: Supported 00:13:19.405 Firmware Activation Notices: Not Supported 00:13:19.405 ANA Change Notices: Not Supported 00:13:19.405 PLE Aggregate Log Change Notices: Not Supported 00:13:19.405 LBA Status Info Alert Notices: Not Supported 00:13:19.405 EGE Aggregate Log Change Notices: Not Supported 00:13:19.405 Normal NVM Subsystem Shutdown event: Not Supported 00:13:19.405 Zone Descriptor Change Notices: Not Supported 00:13:19.405 Discovery Log Change Notices: Not Supported 00:13:19.405 Controller Attributes 00:13:19.405 128-bit Host Identifier: Not Supported 00:13:19.405 Non-Operational Permissive Mode: Not Supported 00:13:19.405 NVM Sets: Not Supported 00:13:19.405 Read Recovery Levels: Not Supported 00:13:19.405 Endurance Groups: Not Supported 00:13:19.405 Predictable Latency Mode: Not Supported 00:13:19.405 Traffic Based Keep ALive: Not Supported 00:13:19.405 Namespace Granularity: Not Supported 00:13:19.405 SQ Associations: Not Supported 00:13:19.405 UUID List: Not Supported 00:13:19.405 Multi-Domain Subsystem: Not Supported 00:13:19.405 Fixed Capacity Management: Not Supported 00:13:19.405 Variable Capacity Management: Not Supported 00:13:19.405 Delete Endurance Group: Not Supported 00:13:19.405 Delete NVM Set: Not Supported 00:13:19.405 Extended LBA Formats Supported: Supported 00:13:19.405 Flexible Data Placement Supported: Not Supported 00:13:19.405 00:13:19.405 Controller Memory Buffer Support 00:13:19.405 ================================ 00:13:19.405 Supported: No 00:13:19.405 00:13:19.405 Persistent Memory Region Support 00:13:19.405 ================================ 00:13:19.405 Supported: No 00:13:19.405 00:13:19.405 Admin Command Set Attributes 00:13:19.405 ============================ 00:13:19.405 Security Send/Receive: Not Supported 00:13:19.405 Format NVM: Supported 00:13:19.405 Firmware Activate/Download: Not Supported 00:13:19.405 Namespace Management: Supported 00:13:19.405 Device Self-Test: Not Supported 00:13:19.405 Directives: Supported 00:13:19.405 NVMe-MI: Not Supported 00:13:19.405 Virtualization Management: Not Supported 00:13:19.405 Doorbell Buffer Config: Supported 00:13:19.405 Get LBA Status Capability: Not Supported 00:13:19.405 Command & Feature Lockdown Capability: Not Supported 00:13:19.405 Abort Command Limit: 4 00:13:19.405 Async Event Request Limit: 4 00:13:19.405 Number of Firmware Slots: N/A 00:13:19.405 Firmware Slot 1 Read-Only: N/A 00:13:19.405 Firmware Activation Without Reset: N/A 00:13:19.405 Multiple Update Detection Support: N/A 00:13:19.405 Firmware Update Granularity: No Information 
Provided 00:13:19.405 Per-Namespace SMART Log: Yes 00:13:19.405 Asymmetric Namespace Access Log Page: Not Supported 00:13:19.405 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:19.405 Command Effects Log Page: Supported 00:13:19.405 Get Log Page Extended Data: Supported 00:13:19.405 Telemetry Log Pages: Not Supported 00:13:19.405 Persistent Event Log Pages: Not Supported 00:13:19.405 Supported Log Pages Log Page: May Support 00:13:19.405 Commands Supported & Effects Log Page: Not Supported 00:13:19.405 Feature Identifiers & Effects Log Page:May Support 00:13:19.405 NVMe-MI Commands & Effects Log Page: May Support 00:13:19.405 Data Area 4 for Telemetry Log: Not Supported 00:13:19.405 Error Log Page Entries Supported: 1 00:13:19.405 Keep Alive: Not Supported 00:13:19.405 00:13:19.405 NVM Command Set Attributes 00:13:19.405 ========================== 00:13:19.405 Submission Queue Entry Size 00:13:19.406 Max: 64 00:13:19.406 Min: 64 00:13:19.406 Completion Queue Entry Size 00:13:19.406 Max: 16 00:13:19.406 Min: 16 00:13:19.406 Number of Namespaces: 256 00:13:19.406 Compare Command: Supported 00:13:19.406 Write Uncorrectable Command: Not Supported 00:13:19.406 Dataset Management Command: Supported 00:13:19.406 Write Zeroes Command: Supported 00:13:19.406 Set Features Save Field: Supported 00:13:19.406 Reservations: Not Supported 00:13:19.406 Timestamp: Supported 00:13:19.406 Copy: Supported 00:13:19.406 Volatile Write Cache: Present 00:13:19.406 Atomic Write Unit (Normal): 1 00:13:19.406 Atomic Write Unit (PFail): 1 00:13:19.406 Atomic Compare & Write Unit: 1 00:13:19.406 Fused Compare & Write: Not Supported 00:13:19.406 Scatter-Gather List 00:13:19.406 SGL Command Set: Supported 00:13:19.406 SGL Keyed: Not Supported 00:13:19.406 SGL Bit Bucket Descriptor: Not Supported 00:13:19.406 SGL Metadata Pointer: Not Supported 00:13:19.406 Oversized SGL: Not Supported 00:13:19.406 SGL Metadata Address: Not Supported 00:13:19.406 SGL Offset: Not Supported 00:13:19.406 Transport SGL Data Block: Not Supported 00:13:19.406 Replay Protected Memory Block: Not Supported 00:13:19.406 00:13:19.406 Firmware Slot Information 00:13:19.406 ========================= 00:13:19.406 Active slot: 1 00:13:19.406 Slot 1 Firmware Revision: 1.0 00:13:19.406 00:13:19.406 00:13:19.406 Commands Supported and Effects 00:13:19.406 ============================== 00:13:19.406 Admin Commands 00:13:19.406 -------------- 00:13:19.406 Delete I/O Submission Queue (00h): Supported 00:13:19.406 Create I/O Submission Queue (01h): Supported 00:13:19.406 Get Log Page (02h): Supported 00:13:19.406 Delete I/O Completion Queue (04h): Supported 00:13:19.406 Create I/O Completion Queue (05h): Supported 00:13:19.406 Identify (06h): Supported 00:13:19.406 Abort (08h): Supported 00:13:19.406 Set Features (09h): Supported 00:13:19.406 Get Features (0Ah): Supported 00:13:19.406 Asynchronous Event Request (0Ch): Supported 00:13:19.406 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:19.406 Directive Send (19h): Supported 00:13:19.406 Directive Receive (1Ah): Supported 00:13:19.406 Virtualization Management (1Ch): Supported 00:13:19.406 Doorbell Buffer Config (7Ch): Supported 00:13:19.406 Format NVM (80h): Supported LBA-Change 00:13:19.406 I/O Commands 00:13:19.406 ------------ 00:13:19.406 Flush (00h): Supported LBA-Change 00:13:19.406 Write (01h): Supported LBA-Change 00:13:19.406 Read (02h): Supported 00:13:19.406 Compare (05h): Supported 00:13:19.406 Write Zeroes (08h): Supported LBA-Change 00:13:19.406 Dataset Management (09h): 
Supported LBA-Change 00:13:19.406 Unknown (0Ch): Supported 00:13:19.406 Unknown (12h): Supported 00:13:19.406 Copy (19h): Supported LBA-Change 00:13:19.406 Unknown (1Dh): Supported LBA-Change 00:13:19.406 00:13:19.406 Error Log 00:13:19.406 ========= 00:13:19.406 00:13:19.406 Arbitration 00:13:19.406 =========== 00:13:19.406 Arbitration Burst: no limit 00:13:19.406 00:13:19.406 Power Management 00:13:19.406 ================ 00:13:19.406 Number of Power States: 1 00:13:19.406 Current Power State: Power State #0 00:13:19.406 Power State #0: 00:13:19.406 Max Power: 25.00 W 00:13:19.406 Non-Operational State: Operational 00:13:19.406 Entry Latency: 16 microseconds 00:13:19.406 Exit Latency: 4 microseconds 00:13:19.406 Relative Read Throughput: 0 00:13:19.406 Relative Read Latency: 0 00:13:19.406 Relative Write Throughput: 0 00:13:19.406 Relative Write Latency: 0 00:13:19.406 Idle Power: Not Reported 00:13:19.406 Active Power: Not Reported 00:13:19.406 Non-Operational Permissive Mode: Not Supported 00:13:19.406 00:13:19.406 Health Information 00:13:19.406 ================== 00:13:19.406 Critical Warnings: 00:13:19.406 Available Spare Space: OK 00:13:19.406 [2024-10-07 11:24:00.979284] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 64823 terminated unexpected 00:13:19.406 Temperature: OK 00:13:19.406 Device Reliability: OK 00:13:19.406 Read Only: No 00:13:19.406 Volatile Memory Backup: OK 00:13:19.406 Current Temperature: 323 Kelvin (50 Celsius) 00:13:19.406 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:19.406 Available Spare: 0% 00:13:19.406 Available Spare Threshold: 0% 00:13:19.406 Life Percentage Used: 0% 00:13:19.406 Data Units Read: 1130 00:13:19.406 Data Units Written: 998 00:13:19.406 Host Read Commands: 54373 00:13:19.406 Host Write Commands: 53164 00:13:19.406 Controller Busy Time: 0 minutes 00:13:19.406 Power Cycles: 0 00:13:19.406 Power On Hours: 0 hours 00:13:19.406 Unsafe Shutdowns: 0 00:13:19.406 Unrecoverable Media Errors: 0 00:13:19.406 Lifetime Error Log Entries: 0 00:13:19.406 Warning Temperature Time: 0 minutes 00:13:19.406 Critical Temperature Time: 0 minutes 00:13:19.406 00:13:19.406 Number of Queues 00:13:19.406 ================ 00:13:19.406 Number of I/O Submission Queues: 64 00:13:19.406 Number of I/O Completion Queues: 64 00:13:19.406 00:13:19.406 ZNS Specific Controller Data 00:13:19.406 ============================ 00:13:19.406 Zone Append Size Limit: 0 00:13:19.406 00:13:19.406 00:13:19.406 Active Namespaces 00:13:19.406 ================= 00:13:19.406 Namespace ID:1 00:13:19.406 Error Recovery Timeout: Unlimited 00:13:19.406 Command Set Identifier: NVM (00h) 00:13:19.406 Deallocate: Supported 00:13:19.406 Deallocated/Unwritten Error: Supported 00:13:19.406 Deallocated Read Value: All 0x00 00:13:19.406 Deallocate in Write Zeroes: Not Supported 00:13:19.406 Deallocated Guard Field: 0xFFFF 00:13:19.406 Flush: Supported 00:13:19.406 Reservation: Not Supported 00:13:19.406 Namespace Sharing Capabilities: Private 00:13:19.406 Size (in LBAs): 1310720 (5GiB) 00:13:19.406 Capacity (in LBAs): 1310720 (5GiB) 00:13:19.406 Utilization (in LBAs): 1310720 (5GiB) 00:13:19.406 Thin Provisioning: Not Supported 00:13:19.406 Per-NS Atomic Units: No 00:13:19.406 Maximum Single Source Range Length: 128 00:13:19.406 Maximum Copy Length: 128 00:13:19.406 Maximum Source Range Count: 128 00:13:19.406 NGUID/EUI64 Never Reused: No 00:13:19.406 Namespace Write Protected: No 00:13:19.406 Number of LBA Formats: 8 00:13:19.406 Current LBA
Format #04 00:13:19.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:19.406 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:19.406 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:19.406 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:19.406 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:19.406 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:19.406 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:19.406 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:19.406 00:13:19.406 NVM Specific Namespace Data 00:13:19.406 =========================== 00:13:19.406 Logical Block Storage Tag Mask: 0 00:13:19.406 Protection Information Capabilities: 00:13:19.406 16b Guard Protection Information Storage Tag Support: No 00:13:19.406 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:19.406 Storage Tag Check Read Support: No 00:13:19.406 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.406 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.406 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.406 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.406 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.406 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.406 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.406 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.406 ===================================================== 00:13:19.406 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:19.406 ===================================================== 00:13:19.406 Controller Capabilities/Features 00:13:19.406 ================================ 00:13:19.407 Vendor ID: 1b36 00:13:19.407 Subsystem Vendor ID: 1af4 00:13:19.407 Serial Number: 12343 00:13:19.407 Model Number: QEMU NVMe Ctrl 00:13:19.407 Firmware Version: 8.0.0 00:13:19.407 Recommended Arb Burst: 6 00:13:19.407 IEEE OUI Identifier: 00 54 52 00:13:19.407 Multi-path I/O 00:13:19.407 May have multiple subsystem ports: No 00:13:19.407 May have multiple controllers: Yes 00:13:19.407 Associated with SR-IOV VF: No 00:13:19.407 Max Data Transfer Size: 524288 00:13:19.407 Max Number of Namespaces: 256 00:13:19.407 Max Number of I/O Queues: 64 00:13:19.407 NVMe Specification Version (VS): 1.4 00:13:19.407 NVMe Specification Version (Identify): 1.4 00:13:19.407 Maximum Queue Entries: 2048 00:13:19.407 Contiguous Queues Required: Yes 00:13:19.407 Arbitration Mechanisms Supported 00:13:19.407 Weighted Round Robin: Not Supported 00:13:19.407 Vendor Specific: Not Supported 00:13:19.407 Reset Timeout: 7500 ms 00:13:19.407 Doorbell Stride: 4 bytes 00:13:19.407 NVM Subsystem Reset: Not Supported 00:13:19.407 Command Sets Supported 00:13:19.407 NVM Command Set: Supported 00:13:19.407 Boot Partition: Not Supported 00:13:19.407 Memory Page Size Minimum: 4096 bytes 00:13:19.407 Memory Page Size Maximum: 65536 bytes 00:13:19.407 Persistent Memory Region: Not Supported 00:13:19.407 Optional Asynchronous Events Supported 00:13:19.407 Namespace Attribute Notices: Supported 00:13:19.407 Firmware Activation Notices: Not Supported 00:13:19.407 ANA Change Notices: Not Supported 00:13:19.407 PLE Aggregate Log Change 
Notices: Not Supported 00:13:19.407 LBA Status Info Alert Notices: Not Supported 00:13:19.407 EGE Aggregate Log Change Notices: Not Supported 00:13:19.407 Normal NVM Subsystem Shutdown event: Not Supported 00:13:19.407 Zone Descriptor Change Notices: Not Supported 00:13:19.407 Discovery Log Change Notices: Not Supported 00:13:19.407 Controller Attributes 00:13:19.407 128-bit Host Identifier: Not Supported 00:13:19.407 Non-Operational Permissive Mode: Not Supported 00:13:19.407 NVM Sets: Not Supported 00:13:19.407 Read Recovery Levels: Not Supported 00:13:19.407 Endurance Groups: Supported 00:13:19.407 Predictable Latency Mode: Not Supported 00:13:19.407 Traffic Based Keep ALive: Not Supported 00:13:19.407 Namespace Granularity: Not Supported 00:13:19.407 SQ Associations: Not Supported 00:13:19.407 UUID List: Not Supported 00:13:19.407 Multi-Domain Subsystem: Not Supported 00:13:19.407 Fixed Capacity Management: Not Supported 00:13:19.407 Variable Capacity Management: Not Supported 00:13:19.407 Delete Endurance Group: Not Supported 00:13:19.407 Delete NVM Set: Not Supported 00:13:19.407 Extended LBA Formats Supported: Supported 00:13:19.407 Flexible Data Placement Supported: Supported 00:13:19.407 00:13:19.407 Controller Memory Buffer Support 00:13:19.407 ================================ 00:13:19.407 Supported: No 00:13:19.407 00:13:19.407 Persistent Memory Region Support 00:13:19.407 ================================ 00:13:19.407 Supported: No 00:13:19.407 00:13:19.407 Admin Command Set Attributes 00:13:19.407 ============================ 00:13:19.407 Security Send/Receive: Not Supported 00:13:19.407 Format NVM: Supported 00:13:19.407 Firmware Activate/Download: Not Supported 00:13:19.407 Namespace Management: Supported 00:13:19.407 Device Self-Test: Not Supported 00:13:19.407 Directives: Supported 00:13:19.407 NVMe-MI: Not Supported 00:13:19.407 Virtualization Management: Not Supported 00:13:19.407 Doorbell Buffer Config: Supported 00:13:19.407 Get LBA Status Capability: Not Supported 00:13:19.407 Command & Feature Lockdown Capability: Not Supported 00:13:19.407 Abort Command Limit: 4 00:13:19.407 Async Event Request Limit: 4 00:13:19.407 Number of Firmware Slots: N/A 00:13:19.407 Firmware Slot 1 Read-Only: N/A 00:13:19.407 Firmware Activation Without Reset: N/A 00:13:19.407 Multiple Update Detection Support: N/A 00:13:19.407 Firmware Update Granularity: No Information Provided 00:13:19.407 Per-Namespace SMART Log: Yes 00:13:19.407 Asymmetric Namespace Access Log Page: Not Supported 00:13:19.407 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:19.407 Command Effects Log Page: Supported 00:13:19.407 Get Log Page Extended Data: Supported 00:13:19.407 Telemetry Log Pages: Not Supported 00:13:19.407 Persistent Event Log Pages: Not Supported 00:13:19.407 Supported Log Pages Log Page: May Support 00:13:19.407 Commands Supported & Effects Log Page: Not Supported 00:13:19.407 Feature Identifiers & Effects Log Page:May Support 00:13:19.407 NVMe-MI Commands & Effects Log Page: May Support 00:13:19.407 Data Area 4 for Telemetry Log: Not Supported 00:13:19.407 Error Log Page Entries Supported: 1 00:13:19.407 Keep Alive: Not Supported 00:13:19.407 00:13:19.407 NVM Command Set Attributes 00:13:19.407 ========================== 00:13:19.407 Submission Queue Entry Size 00:13:19.407 Max: 64 00:13:19.407 Min: 64 00:13:19.407 Completion Queue Entry Size 00:13:19.407 Max: 16 00:13:19.407 Min: 16 00:13:19.407 Number of Namespaces: 256 00:13:19.407 Compare Command: Supported 00:13:19.407 Write 
Uncorrectable Command: Not Supported 00:13:19.407 Dataset Management Command: Supported 00:13:19.407 Write Zeroes Command: Supported 00:13:19.407 Set Features Save Field: Supported 00:13:19.407 Reservations: Not Supported 00:13:19.407 Timestamp: Supported 00:13:19.407 Copy: Supported 00:13:19.407 Volatile Write Cache: Present 00:13:19.407 Atomic Write Unit (Normal): 1 00:13:19.407 Atomic Write Unit (PFail): 1 00:13:19.407 Atomic Compare & Write Unit: 1 00:13:19.407 Fused Compare & Write: Not Supported 00:13:19.407 Scatter-Gather List 00:13:19.407 SGL Command Set: Supported 00:13:19.407 SGL Keyed: Not Supported 00:13:19.407 SGL Bit Bucket Descriptor: Not Supported 00:13:19.407 SGL Metadata Pointer: Not Supported 00:13:19.407 Oversized SGL: Not Supported 00:13:19.407 SGL Metadata Address: Not Supported 00:13:19.407 SGL Offset: Not Supported 00:13:19.407 Transport SGL Data Block: Not Supported 00:13:19.407 Replay Protected Memory Block: Not Supported 00:13:19.407 00:13:19.407 Firmware Slot Information 00:13:19.407 ========================= 00:13:19.407 Active slot: 1 00:13:19.407 Slot 1 Firmware Revision: 1.0 00:13:19.407 00:13:19.407 00:13:19.407 Commands Supported and Effects 00:13:19.407 ============================== 00:13:19.407 Admin Commands 00:13:19.407 -------------- 00:13:19.407 Delete I/O Submission Queue (00h): Supported 00:13:19.407 Create I/O Submission Queue (01h): Supported 00:13:19.407 Get Log Page (02h): Supported 00:13:19.407 Delete I/O Completion Queue (04h): Supported 00:13:19.407 Create I/O Completion Queue (05h): Supported 00:13:19.407 Identify (06h): Supported 00:13:19.407 Abort (08h): Supported 00:13:19.407 Set Features (09h): Supported 00:13:19.407 Get Features (0Ah): Supported 00:13:19.407 Asynchronous Event Request (0Ch): Supported 00:13:19.407 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:19.407 Directive Send (19h): Supported 00:13:19.407 Directive Receive (1Ah): Supported 00:13:19.407 Virtualization Management (1Ch): Supported 00:13:19.407 Doorbell Buffer Config (7Ch): Supported 00:13:19.407 Format NVM (80h): Supported LBA-Change 00:13:19.407 I/O Commands 00:13:19.407 ------------ 00:13:19.407 Flush (00h): Supported LBA-Change 00:13:19.407 Write (01h): Supported LBA-Change 00:13:19.407 Read (02h): Supported 00:13:19.407 Compare (05h): Supported 00:13:19.407 Write Zeroes (08h): Supported LBA-Change 00:13:19.407 Dataset Management (09h): Supported LBA-Change 00:13:19.407 Unknown (0Ch): Supported 00:13:19.407 Unknown (12h): Supported 00:13:19.407 Copy (19h): Supported LBA-Change 00:13:19.407 Unknown (1Dh): Supported LBA-Change 00:13:19.407 00:13:19.407 Error Log 00:13:19.407 ========= 00:13:19.407 00:13:19.407 Arbitration 00:13:19.407 =========== 00:13:19.407 Arbitration Burst: no limit 00:13:19.407 00:13:19.407 Power Management 00:13:19.407 ================ 00:13:19.407 Number of Power States: 1 00:13:19.407 Current Power State: Power State #0 00:13:19.407 Power State #0: 00:13:19.407 Max Power: 25.00 W 00:13:19.407 Non-Operational State: Operational 00:13:19.407 Entry Latency: 16 microseconds 00:13:19.407 Exit Latency: 4 microseconds 00:13:19.407 Relative Read Throughput: 0 00:13:19.407 Relative Read Latency: 0 00:13:19.407 Relative Write Throughput: 0 00:13:19.407 Relative Write Latency: 0 00:13:19.407 Idle Power: Not Reported 00:13:19.407 Active Power: Not Reported 00:13:19.407 Non-Operational Permissive Mode: Not Supported 00:13:19.407 00:13:19.407 Health Information 00:13:19.407 ================== 00:13:19.407 Critical Warnings: 00:13:19.407 
Available Spare Space: OK 00:13:19.407 Temperature: OK 00:13:19.407 Device Reliability: OK 00:13:19.407 Read Only: No 00:13:19.407 Volatile Memory Backup: OK 00:13:19.407 Current Temperature: 323 Kelvin (50 Celsius) 00:13:19.408 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:19.408 Available Spare: 0% 00:13:19.408 Available Spare Threshold: 0% 00:13:19.408 Life Percentage Used: 0% 00:13:19.408 Data Units Read: 871 00:13:19.408 Data Units Written: 800 00:13:19.408 Host Read Commands: 38264 00:13:19.408 Host Write Commands: 37687 00:13:19.408 Controller Busy Time: 0 minutes 00:13:19.408 Power Cycles: 0 00:13:19.408 Power On Hours: 0 hours 00:13:19.408 Unsafe Shutdowns: 0 00:13:19.408 Unrecoverable Media Errors: 0 00:13:19.408 Lifetime Error Log Entries: 0 00:13:19.408 Warning Temperature Time: 0 minutes 00:13:19.408 Critical Temperature Time: 0 minutes 00:13:19.408 00:13:19.408 Number of Queues 00:13:19.408 ================ 00:13:19.408 Number of I/O Submission Queues: 64 00:13:19.408 Number of I/O Completion Queues: 64 00:13:19.408 00:13:19.408 ZNS Specific Controller Data 00:13:19.408 ============================ 00:13:19.408 Zone Append Size Limit: 0 00:13:19.408 00:13:19.408 00:13:19.408 Active Namespaces 00:13:19.408 ================= 00:13:19.408 Namespace ID:1 00:13:19.408 Error Recovery Timeout: Unlimited 00:13:19.408 Command Set Identifier: NVM (00h) 00:13:19.408 Deallocate: Supported 00:13:19.408 Deallocated/Unwritten Error: Supported 00:13:19.408 Deallocated Read Value: All 0x00 00:13:19.408 Deallocate in Write Zeroes: Not Supported 00:13:19.408 Deallocated Guard Field: 0xFFFF 00:13:19.408 Flush: Supported 00:13:19.408 Reservation: Not Supported 00:13:19.408 Namespace Sharing Capabilities: Multiple Controllers 00:13:19.408 Size (in LBAs): 262144 (1GiB) 00:13:19.408 Capacity (in LBAs): 262144 (1GiB) 00:13:19.408 Utilization (in LBAs): 262144 (1GiB) 00:13:19.408 Thin Provisioning: Not Supported 00:13:19.408 Per-NS Atomic Units: No 00:13:19.408 Maximum Single Source Range Length: 128 00:13:19.408 Maximum Copy Length: 128 00:13:19.408 Maximum Source Range Count: 128 00:13:19.408 NGUID/EUI64 Never Reused: No 00:13:19.408 Namespace Write Protected: No 00:13:19.408 Endurance group ID: 1 00:13:19.408 Number of LBA Formats: 8 00:13:19.408 Current LBA Format: LBA Format #04 00:13:19.408 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:19.408 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:19.408 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:19.408 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:19.408 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:19.408 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:19.408 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:19.408 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:19.408 00:13:19.408 Get Feature FDP: 00:13:19.408 ================ 00:13:19.408 Enabled: Yes 00:13:19.408 FDP configuration index: 0 00:13:19.408 00:13:19.408 FDP configurations log page 00:13:19.408 =========================== 00:13:19.408 Number of FDP configurations: 1 00:13:19.408 Version: 0 00:13:19.408 Size: 112 00:13:19.408 FDP Configuration Descriptor: 0 00:13:19.408 Descriptor Size: 96 00:13:19.408 Reclaim Group Identifier format: 2 00:13:19.408 FDP Volatile Write Cache: Not Present 00:13:19.408 FDP Configuration: Valid 00:13:19.408 Vendor Specific Size: 0 00:13:19.408 Number of Reclaim Groups: 2 00:13:19.408 Number of Reclaim Unit Handles: 8 00:13:19.408 Max Placement Identifiers: 128
00:13:19.408 Number of Namespaces Supported: 256 00:13:19.408 Reclaim unit Nominal Size: 6000000 bytes 00:13:19.408 Estimated Reclaim Unit Time Limit: Not Reported 00:13:19.408 RUH Desc #000: RUH Type: Initially Isolated 00:13:19.408 RUH Desc #001: RUH Type: Initially Isolated 00:13:19.408 RUH Desc #002: RUH Type: Initially Isolated 00:13:19.408 RUH Desc #003: RUH Type: Initially Isolated 00:13:19.408 RUH Desc #004: RUH Type: Initially Isolated 00:13:19.408 RUH Desc #005: RUH Type: Initially Isolated 00:13:19.408 RUH Desc #006: RUH Type: Initially Isolated 00:13:19.408 RUH Desc #007: RUH Type: Initially Isolated 00:13:19.408 00:13:19.408 FDP reclaim unit handle usage log page 00:13:19.408 ====================================== 00:13:19.408 Number of Reclaim Unit Handles: 8 00:13:19.408 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:19.408 RUH Usage Desc #001: RUH Attributes: Unused 00:13:19.408 RUH Usage Desc #002: RUH Attributes: Unused 00:13:19.408 RUH Usage Desc #003: RUH Attributes: Unused 00:13:19.408 RUH Usage Desc #004: RUH Attributes: Unused 00:13:19.408 RUH Usage Desc #005: RUH Attributes: Unused 00:13:19.408 RUH Usage Desc #006: RUH Attributes: Unused 00:13:19.408 RUH Usage Desc #007: RUH Attributes: Unused 00:13:19.408 00:13:19.408 FDP statistics log page 00:13:19.408 ======================= 00:13:19.408 Host bytes with metadata written: 512532480 00:13:19.408 [2024-10-07 11:24:00.981284] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 64823 terminated unexpected 00:13:19.408 Media bytes with metadata written: 512589824 00:13:19.408 Media bytes erased: 0 00:13:19.408 00:13:19.408 FDP events log page 00:13:19.408 =================== 00:13:19.408 Number of FDP events: 0 00:13:19.408 00:13:19.408 NVM Specific Namespace Data 00:13:19.408 =========================== 00:13:19.408 Logical Block Storage Tag Mask: 0 00:13:19.408 Protection Information Capabilities: 00:13:19.408 16b Guard Protection Information Storage Tag Support: No 00:13:19.408 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:19.408 Storage Tag Check Read Support: No 00:13:19.408 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.408 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.408 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.408 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.408 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.408 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.408 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.408 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.408 ===================================================== 00:13:19.408 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:19.408 ===================================================== 00:13:19.408 Controller Capabilities/Features 00:13:19.408 ================================ 00:13:19.408 Vendor ID: 1b36 00:13:19.408 Subsystem Vendor ID: 1af4 00:13:19.408 Serial Number: 12342 00:13:19.408 Model Number: QEMU NVMe Ctrl 00:13:19.408 Firmware Version: 8.0.0 00:13:19.408 Recommended Arb Burst: 6 00:13:19.408 IEEE OUI Identifier: 00 54 52 00:13:19.408 Multi-path I/O 00:13:19.408 
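
The 12343 controller a few reports up is the FDP-enabled subsystem exercised when SPDK_TEST_NVME_FDP=1; its dump decodes the FDP configuration, reclaim-unit-handle usage, and statistics log pages shown above. With the device handed back to the kernel driver, the same raw pages could be read with nvme-cli instead; a hedged sketch, where the device name is illustrative and exact flags vary across nvme-cli versions:

    # FDP log pages sit at standard log identifiers, scoped to an
    # endurance group via the log-specific identifier (ID 1 above):
    #   0x20 FDP configurations, 0x21 RUH usage, 0x22 FDP statistics.
    nvme get-log /dev/nvme0 --log-id=0x22 --log-len=64 --lsi=1
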
May have multiple subsystem ports: No 00:13:19.408 May have multiple controllers: No 00:13:19.408 Associated with SR-IOV VF: No 00:13:19.408 Max Data Transfer Size: 524288 00:13:19.408 Max Number of Namespaces: 256 00:13:19.408 Max Number of I/O Queues: 64 00:13:19.408 NVMe Specification Version (VS): 1.4 00:13:19.408 NVMe Specification Version (Identify): 1.4 00:13:19.408 Maximum Queue Entries: 2048 00:13:19.408 Contiguous Queues Required: Yes 00:13:19.408 Arbitration Mechanisms Supported 00:13:19.408 Weighted Round Robin: Not Supported 00:13:19.408 Vendor Specific: Not Supported 00:13:19.408 Reset Timeout: 7500 ms 00:13:19.408 Doorbell Stride: 4 bytes 00:13:19.408 NVM Subsystem Reset: Not Supported 00:13:19.408 Command Sets Supported 00:13:19.408 NVM Command Set: Supported 00:13:19.408 Boot Partition: Not Supported 00:13:19.408 Memory Page Size Minimum: 4096 bytes 00:13:19.408 Memory Page Size Maximum: 65536 bytes 00:13:19.408 Persistent Memory Region: Not Supported 00:13:19.408 Optional Asynchronous Events Supported 00:13:19.408 Namespace Attribute Notices: Supported 00:13:19.408 Firmware Activation Notices: Not Supported 00:13:19.409 ANA Change Notices: Not Supported 00:13:19.409 PLE Aggregate Log Change Notices: Not Supported 00:13:19.409 LBA Status Info Alert Notices: Not Supported 00:13:19.409 EGE Aggregate Log Change Notices: Not Supported 00:13:19.409 Normal NVM Subsystem Shutdown event: Not Supported 00:13:19.409 Zone Descriptor Change Notices: Not Supported 00:13:19.409 Discovery Log Change Notices: Not Supported 00:13:19.409 Controller Attributes 00:13:19.409 128-bit Host Identifier: Not Supported 00:13:19.409 Non-Operational Permissive Mode: Not Supported 00:13:19.409 NVM Sets: Not Supported 00:13:19.409 Read Recovery Levels: Not Supported 00:13:19.409 Endurance Groups: Not Supported 00:13:19.409 Predictable Latency Mode: Not Supported 00:13:19.409 Traffic Based Keep ALive: Not Supported 00:13:19.409 Namespace Granularity: Not Supported 00:13:19.409 SQ Associations: Not Supported 00:13:19.409 UUID List: Not Supported 00:13:19.409 Multi-Domain Subsystem: Not Supported 00:13:19.409 Fixed Capacity Management: Not Supported 00:13:19.409 Variable Capacity Management: Not Supported 00:13:19.409 Delete Endurance Group: Not Supported 00:13:19.409 Delete NVM Set: Not Supported 00:13:19.409 Extended LBA Formats Supported: Supported 00:13:19.409 Flexible Data Placement Supported: Not Supported 00:13:19.409 00:13:19.409 Controller Memory Buffer Support 00:13:19.409 ================================ 00:13:19.409 Supported: No 00:13:19.409 00:13:19.409 Persistent Memory Region Support 00:13:19.409 ================================ 00:13:19.409 Supported: No 00:13:19.409 00:13:19.409 Admin Command Set Attributes 00:13:19.409 ============================ 00:13:19.409 Security Send/Receive: Not Supported 00:13:19.409 Format NVM: Supported 00:13:19.409 Firmware Activate/Download: Not Supported 00:13:19.409 Namespace Management: Supported 00:13:19.409 Device Self-Test: Not Supported 00:13:19.409 Directives: Supported 00:13:19.409 NVMe-MI: Not Supported 00:13:19.409 Virtualization Management: Not Supported 00:13:19.409 Doorbell Buffer Config: Supported 00:13:19.409 Get LBA Status Capability: Not Supported 00:13:19.409 Command & Feature Lockdown Capability: Not Supported 00:13:19.409 Abort Command Limit: 4 00:13:19.409 Async Event Request Limit: 4 00:13:19.409 Number of Firmware Slots: N/A 00:13:19.409 Firmware Slot 1 Read-Only: N/A 00:13:19.409 Firmware Activation Without Reset: N/A 00:13:19.409 
Multiple Update Detection Support: N/A 00:13:19.409 Firmware Update Granularity: No Information Provided 00:13:19.409 Per-Namespace SMART Log: Yes 00:13:19.409 Asymmetric Namespace Access Log Page: Not Supported 00:13:19.409 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:19.409 Command Effects Log Page: Supported 00:13:19.409 Get Log Page Extended Data: Supported 00:13:19.409 Telemetry Log Pages: Not Supported 00:13:19.409 Persistent Event Log Pages: Not Supported 00:13:19.409 Supported Log Pages Log Page: May Support 00:13:19.409 Commands Supported & Effects Log Page: Not Supported 00:13:19.409 Feature Identifiers & Effects Log Page:May Support 00:13:19.409 NVMe-MI Commands & Effects Log Page: May Support 00:13:19.409 Data Area 4 for Telemetry Log: Not Supported 00:13:19.409 Error Log Page Entries Supported: 1 00:13:19.409 Keep Alive: Not Supported 00:13:19.409 00:13:19.409 NVM Command Set Attributes 00:13:19.409 ========================== 00:13:19.409 Submission Queue Entry Size 00:13:19.409 Max: 64 00:13:19.409 Min: 64 00:13:19.409 Completion Queue Entry Size 00:13:19.409 Max: 16 00:13:19.409 Min: 16 00:13:19.409 Number of Namespaces: 256 00:13:19.409 Compare Command: Supported 00:13:19.409 Write Uncorrectable Command: Not Supported 00:13:19.409 Dataset Management Command: Supported 00:13:19.409 Write Zeroes Command: Supported 00:13:19.409 Set Features Save Field: Supported 00:13:19.409 Reservations: Not Supported 00:13:19.409 Timestamp: Supported 00:13:19.409 Copy: Supported 00:13:19.409 Volatile Write Cache: Present 00:13:19.409 Atomic Write Unit (Normal): 1 00:13:19.409 Atomic Write Unit (PFail): 1 00:13:19.409 Atomic Compare & Write Unit: 1 00:13:19.409 Fused Compare & Write: Not Supported 00:13:19.409 Scatter-Gather List 00:13:19.409 SGL Command Set: Supported 00:13:19.409 SGL Keyed: Not Supported 00:13:19.409 SGL Bit Bucket Descriptor: Not Supported 00:13:19.409 SGL Metadata Pointer: Not Supported 00:13:19.409 Oversized SGL: Not Supported 00:13:19.409 SGL Metadata Address: Not Supported 00:13:19.409 SGL Offset: Not Supported 00:13:19.409 Transport SGL Data Block: Not Supported 00:13:19.409 Replay Protected Memory Block: Not Supported 00:13:19.409 00:13:19.409 Firmware Slot Information 00:13:19.409 ========================= 00:13:19.409 Active slot: 1 00:13:19.409 Slot 1 Firmware Revision: 1.0 00:13:19.409 00:13:19.409 00:13:19.409 Commands Supported and Effects 00:13:19.409 ============================== 00:13:19.409 Admin Commands 00:13:19.409 -------------- 00:13:19.409 Delete I/O Submission Queue (00h): Supported 00:13:19.409 Create I/O Submission Queue (01h): Supported 00:13:19.409 Get Log Page (02h): Supported 00:13:19.409 Delete I/O Completion Queue (04h): Supported 00:13:19.409 Create I/O Completion Queue (05h): Supported 00:13:19.409 Identify (06h): Supported 00:13:19.409 Abort (08h): Supported 00:13:19.409 Set Features (09h): Supported 00:13:19.409 Get Features (0Ah): Supported 00:13:19.409 Asynchronous Event Request (0Ch): Supported 00:13:19.409 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:19.409 Directive Send (19h): Supported 00:13:19.409 Directive Receive (1Ah): Supported 00:13:19.409 Virtualization Management (1Ch): Supported 00:13:19.409 Doorbell Buffer Config (7Ch): Supported 00:13:19.409 Format NVM (80h): Supported LBA-Change 00:13:19.409 I/O Commands 00:13:19.409 ------------ 00:13:19.409 Flush (00h): Supported LBA-Change 00:13:19.409 Write (01h): Supported LBA-Change 00:13:19.409 Read (02h): Supported 00:13:19.409 Compare (05h): Supported 
00:13:19.409 Write Zeroes (08h): Supported LBA-Change 00:13:19.409 Dataset Management (09h): Supported LBA-Change 00:13:19.409 Unknown (0Ch): Supported 00:13:19.409 Unknown (12h): Supported 00:13:19.409 Copy (19h): Supported LBA-Change 00:13:19.409 Unknown (1Dh): Supported LBA-Change 00:13:19.409 00:13:19.409 Error Log 00:13:19.409 ========= 00:13:19.409 00:13:19.409 Arbitration 00:13:19.409 =========== 00:13:19.409 Arbitration Burst: no limit 00:13:19.409 00:13:19.409 Power Management 00:13:19.409 ================ 00:13:19.409 Number of Power States: 1 00:13:19.409 Current Power State: Power State #0 00:13:19.409 Power State #0: 00:13:19.409 Max Power: 25.00 W 00:13:19.409 Non-Operational State: Operational 00:13:19.409 Entry Latency: 16 microseconds 00:13:19.409 Exit Latency: 4 microseconds 00:13:19.409 Relative Read Throughput: 0 00:13:19.409 Relative Read Latency: 0 00:13:19.409 Relative Write Throughput: 0 00:13:19.409 Relative Write Latency: 0 00:13:19.409 Idle Power: Not Reported 00:13:19.409 Active Power: Not Reported 00:13:19.409 Non-Operational Permissive Mode: Not Supported 00:13:19.409 00:13:19.409 Health Information 00:13:19.409 ================== 00:13:19.409 Critical Warnings: 00:13:19.409 Available Spare Space: OK 00:13:19.409 Temperature: OK 00:13:19.409 Device Reliability: OK 00:13:19.409 Read Only: No 00:13:19.409 Volatile Memory Backup: OK 00:13:19.409 Current Temperature: 323 Kelvin (50 Celsius) 00:13:19.409 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:19.409 Available Spare: 0% 00:13:19.409 Available Spare Threshold: 0% 00:13:19.409 Life Percentage Used: 0% 00:13:19.409 Data Units Read: 2371 00:13:19.409 Data Units Written: 2158 00:13:19.409 Host Read Commands: 112784 00:13:19.409 Host Write Commands: 111053 00:13:19.409 Controller Busy Time: 0 minutes 00:13:19.409 Power Cycles: 0 00:13:19.409 Power On Hours: 0 hours 00:13:19.409 Unsafe Shutdowns: 0 00:13:19.409 Unrecoverable Media Errors: 0 00:13:19.409 Lifetime Error Log Entries: 0 00:13:19.409 Warning Temperature Time: 0 minutes 00:13:19.409 Critical Temperature Time: 0 minutes 00:13:19.409 00:13:19.409 Number of Queues 00:13:19.409 ================ 00:13:19.409 Number of I/O Submission Queues: 64 00:13:19.409 Number of I/O Completion Queues: 64 00:13:19.409 00:13:19.409 ZNS Specific Controller Data 00:13:19.409 ============================ 00:13:19.409 Zone Append Size Limit: 0 00:13:19.409 00:13:19.409 00:13:19.409 Active Namespaces 00:13:19.409 ================= 00:13:19.409 Namespace ID:1 00:13:19.409 Error Recovery Timeout: Unlimited 00:13:19.409 Command Set Identifier: NVM (00h) 00:13:19.409 Deallocate: Supported 00:13:19.409 Deallocated/Unwritten Error: Supported 00:13:19.409 Deallocated Read Value: All 0x00 00:13:19.409 Deallocate in Write Zeroes: Not Supported 00:13:19.409 Deallocated Guard Field: 0xFFFF 00:13:19.409 Flush: Supported 00:13:19.409 Reservation: Not Supported 00:13:19.409 Namespace Sharing Capabilities: Private 00:13:19.409 Size (in LBAs): 1048576 (4GiB) 00:13:19.409 Capacity (in LBAs): 1048576 (4GiB) 00:13:19.409 Utilization (in LBAs): 1048576 (4GiB) 00:13:19.410 Thin Provisioning: Not Supported 00:13:19.410 Per-NS Atomic Units: No 00:13:19.410 Maximum Single Source Range Length: 128 00:13:19.410 Maximum Copy Length: 128 00:13:19.410 Maximum Source Range Count: 128 00:13:19.410 NGUID/EUI64 Never Reused: No 00:13:19.410 Namespace Write Protected: No 00:13:19.410 Number of LBA Formats: 8 00:13:19.410 Current LBA Format: LBA Format #04 00:13:19.410 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:13:19.410 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:19.410 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:19.410 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:19.410 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:19.410 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:19.410 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:19.410 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:19.410 00:13:19.410 NVM Specific Namespace Data 00:13:19.410 =========================== 00:13:19.410 Logical Block Storage Tag Mask: 0 00:13:19.410 Protection Information Capabilities: 00:13:19.410 16b Guard Protection Information Storage Tag Support: No 00:13:19.410 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:19.410 Storage Tag Check Read Support: No 00:13:19.410 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Namespace ID:2 00:13:19.410 Error Recovery Timeout: Unlimited 00:13:19.410 Command Set Identifier: NVM (00h) 00:13:19.410 Deallocate: Supported 00:13:19.410 Deallocated/Unwritten Error: Supported 00:13:19.410 Deallocated Read Value: All 0x00 00:13:19.410 Deallocate in Write Zeroes: Not Supported 00:13:19.410 Deallocated Guard Field: 0xFFFF 00:13:19.410 Flush: Supported 00:13:19.410 Reservation: Not Supported 00:13:19.410 Namespace Sharing Capabilities: Private 00:13:19.410 Size (in LBAs): 1048576 (4GiB) 00:13:19.410 Capacity (in LBAs): 1048576 (4GiB) 00:13:19.410 Utilization (in LBAs): 1048576 (4GiB) 00:13:19.410 Thin Provisioning: Not Supported 00:13:19.410 Per-NS Atomic Units: No 00:13:19.410 Maximum Single Source Range Length: 128 00:13:19.410 Maximum Copy Length: 128 00:13:19.410 Maximum Source Range Count: 128 00:13:19.410 NGUID/EUI64 Never Reused: No 00:13:19.410 Namespace Write Protected: No 00:13:19.410 Number of LBA Formats: 8 00:13:19.410 Current LBA Format: LBA Format #04 00:13:19.410 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:19.410 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:19.410 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:19.410 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:19.410 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:19.410 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:19.410 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:19.410 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:19.410 00:13:19.410 NVM Specific Namespace Data 00:13:19.410 =========================== 00:13:19.410 Logical Block Storage Tag Mask: 0 00:13:19.410 Protection Information Capabilities: 00:13:19.410 16b Guard Protection Information Storage Tag Support: No 00:13:19.410 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:13:19.410 Storage Tag Check Read Support: No 00:13:19.410 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Namespace ID:3 00:13:19.410 Error Recovery Timeout: Unlimited 00:13:19.410 Command Set Identifier: NVM (00h) 00:13:19.410 Deallocate: Supported 00:13:19.410 Deallocated/Unwritten Error: Supported 00:13:19.410 Deallocated Read Value: All 0x00 00:13:19.410 Deallocate in Write Zeroes: Not Supported 00:13:19.410 Deallocated Guard Field: 0xFFFF 00:13:19.410 Flush: Supported 00:13:19.410 Reservation: Not Supported 00:13:19.410 Namespace Sharing Capabilities: Private 00:13:19.410 Size (in LBAs): 1048576 (4GiB) 00:13:19.410 Capacity (in LBAs): 1048576 (4GiB) 00:13:19.410 Utilization (in LBAs): 1048576 (4GiB) 00:13:19.410 Thin Provisioning: Not Supported 00:13:19.410 Per-NS Atomic Units: No 00:13:19.410 Maximum Single Source Range Length: 128 00:13:19.410 Maximum Copy Length: 128 00:13:19.410 Maximum Source Range Count: 128 00:13:19.410 NGUID/EUI64 Never Reused: No 00:13:19.410 Namespace Write Protected: No 00:13:19.410 Number of LBA Formats: 8 00:13:19.410 Current LBA Format: LBA Format #04 00:13:19.410 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:19.410 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:19.410 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:19.410 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:19.410 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:19.410 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:19.410 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:19.410 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:19.410 00:13:19.410 NVM Specific Namespace Data 00:13:19.410 =========================== 00:13:19.410 Logical Block Storage Tag Mask: 0 00:13:19.410 Protection Information Capabilities: 00:13:19.410 16b Guard Protection Information Storage Tag Support: No 00:13:19.410 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:19.410 Storage Tag Check Read Support: No 00:13:19.410 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.410 11:24:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:19.410 11:24:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:13:19.669 ===================================================== 00:13:19.669 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:19.669 ===================================================== 00:13:19.669 Controller Capabilities/Features 00:13:19.669 ================================ 00:13:19.669 Vendor ID: 1b36 00:13:19.669 Subsystem Vendor ID: 1af4 00:13:19.669 Serial Number: 12340 00:13:19.669 Model Number: QEMU NVMe Ctrl 00:13:19.669 Firmware Version: 8.0.0 00:13:19.669 Recommended Arb Burst: 6 00:13:19.669 IEEE OUI Identifier: 00 54 52 00:13:19.669 Multi-path I/O 00:13:19.669 May have multiple subsystem ports: No 00:13:19.669 May have multiple controllers: No 00:13:19.669 Associated with SR-IOV VF: No 00:13:19.669 Max Data Transfer Size: 524288 00:13:19.669 Max Number of Namespaces: 256 00:13:19.669 Max Number of I/O Queues: 64 00:13:19.669 NVMe Specification Version (VS): 1.4 00:13:19.669 NVMe Specification Version (Identify): 1.4 00:13:19.669 Maximum Queue Entries: 2048 00:13:19.669 Contiguous Queues Required: Yes 00:13:19.669 Arbitration Mechanisms Supported 00:13:19.669 Weighted Round Robin: Not Supported 00:13:19.669 Vendor Specific: Not Supported 00:13:19.669 Reset Timeout: 7500 ms 00:13:19.669 Doorbell Stride: 4 bytes 00:13:19.669 NVM Subsystem Reset: Not Supported 00:13:19.669 Command Sets Supported 00:13:19.669 NVM Command Set: Supported 00:13:19.669 Boot Partition: Not Supported 00:13:19.669 Memory Page Size Minimum: 4096 bytes 00:13:19.669 Memory Page Size Maximum: 65536 bytes 00:13:19.669 Persistent Memory Region: Not Supported 00:13:19.669 Optional Asynchronous Events Supported 00:13:19.669 Namespace Attribute Notices: Supported 00:13:19.669 Firmware Activation Notices: Not Supported 00:13:19.669 ANA Change Notices: Not Supported 00:13:19.669 PLE Aggregate Log Change Notices: Not Supported 00:13:19.669 LBA Status Info Alert Notices: Not Supported 00:13:19.669 EGE Aggregate Log Change Notices: Not Supported 00:13:19.669 Normal NVM Subsystem Shutdown event: Not Supported 00:13:19.669 Zone Descriptor Change Notices: Not Supported 00:13:19.669 Discovery Log Change Notices: Not Supported 00:13:19.669 Controller Attributes 00:13:19.669 128-bit Host Identifier: Not Supported 00:13:19.669 Non-Operational Permissive Mode: Not Supported 00:13:19.669 NVM Sets: Not Supported 00:13:19.669 Read Recovery Levels: Not Supported 00:13:19.669 Endurance Groups: Not Supported 00:13:19.669 Predictable Latency Mode: Not Supported 00:13:19.669 Traffic Based Keep ALive: Not Supported 00:13:19.669 Namespace Granularity: Not Supported 00:13:19.669 SQ Associations: Not Supported 00:13:19.669 UUID List: Not Supported 00:13:19.669 Multi-Domain Subsystem: Not Supported 00:13:19.669 Fixed Capacity Management: Not Supported 00:13:19.669 Variable Capacity Management: Not Supported 00:13:19.669 Delete Endurance Group: Not Supported 00:13:19.669 Delete NVM Set: Not Supported 00:13:19.669 Extended LBA Formats Supported: Supported 00:13:19.669 Flexible Data Placement Supported: Not Supported 00:13:19.669 00:13:19.669 Controller Memory Buffer Support 00:13:19.669 ================================ 00:13:19.669 Supported: No 00:13:19.669 00:13:19.669 Persistent Memory Region Support 00:13:19.669 
================================ 00:13:19.669 Supported: No 00:13:19.669 00:13:19.669 Admin Command Set Attributes 00:13:19.669 ============================ 00:13:19.669 Security Send/Receive: Not Supported 00:13:19.669 Format NVM: Supported 00:13:19.669 Firmware Activate/Download: Not Supported 00:13:19.669 Namespace Management: Supported 00:13:19.669 Device Self-Test: Not Supported 00:13:19.669 Directives: Supported 00:13:19.669 NVMe-MI: Not Supported 00:13:19.669 Virtualization Management: Not Supported 00:13:19.669 Doorbell Buffer Config: Supported 00:13:19.669 Get LBA Status Capability: Not Supported 00:13:19.669 Command & Feature Lockdown Capability: Not Supported 00:13:19.669 Abort Command Limit: 4 00:13:19.669 Async Event Request Limit: 4 00:13:19.669 Number of Firmware Slots: N/A 00:13:19.669 Firmware Slot 1 Read-Only: N/A 00:13:19.669 Firmware Activation Without Reset: N/A 00:13:19.669 Multiple Update Detection Support: N/A 00:13:19.669 Firmware Update Granularity: No Information Provided 00:13:19.669 Per-Namespace SMART Log: Yes 00:13:19.669 Asymmetric Namespace Access Log Page: Not Supported 00:13:19.669 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:19.669 Command Effects Log Page: Supported 00:13:19.669 Get Log Page Extended Data: Supported 00:13:19.669 Telemetry Log Pages: Not Supported 00:13:19.669 Persistent Event Log Pages: Not Supported 00:13:19.669 Supported Log Pages Log Page: May Support 00:13:19.669 Commands Supported & Effects Log Page: Not Supported 00:13:19.669 Feature Identifiers & Effects Log Page:May Support 00:13:19.669 NVMe-MI Commands & Effects Log Page: May Support 00:13:19.669 Data Area 4 for Telemetry Log: Not Supported 00:13:19.669 Error Log Page Entries Supported: 1 00:13:19.669 Keep Alive: Not Supported 00:13:19.669 00:13:19.669 NVM Command Set Attributes 00:13:19.669 ========================== 00:13:19.669 Submission Queue Entry Size 00:13:19.669 Max: 64 00:13:19.669 Min: 64 00:13:19.669 Completion Queue Entry Size 00:13:19.669 Max: 16 00:13:19.669 Min: 16 00:13:19.669 Number of Namespaces: 256 00:13:19.669 Compare Command: Supported 00:13:19.669 Write Uncorrectable Command: Not Supported 00:13:19.669 Dataset Management Command: Supported 00:13:19.669 Write Zeroes Command: Supported 00:13:19.669 Set Features Save Field: Supported 00:13:19.669 Reservations: Not Supported 00:13:19.669 Timestamp: Supported 00:13:19.669 Copy: Supported 00:13:19.669 Volatile Write Cache: Present 00:13:19.669 Atomic Write Unit (Normal): 1 00:13:19.669 Atomic Write Unit (PFail): 1 00:13:19.669 Atomic Compare & Write Unit: 1 00:13:19.669 Fused Compare & Write: Not Supported 00:13:19.669 Scatter-Gather List 00:13:19.669 SGL Command Set: Supported 00:13:19.669 SGL Keyed: Not Supported 00:13:19.669 SGL Bit Bucket Descriptor: Not Supported 00:13:19.669 SGL Metadata Pointer: Not Supported 00:13:19.669 Oversized SGL: Not Supported 00:13:19.669 SGL Metadata Address: Not Supported 00:13:19.669 SGL Offset: Not Supported 00:13:19.669 Transport SGL Data Block: Not Supported 00:13:19.669 Replay Protected Memory Block: Not Supported 00:13:19.669 00:13:19.669 Firmware Slot Information 00:13:19.669 ========================= 00:13:19.669 Active slot: 1 00:13:19.669 Slot 1 Firmware Revision: 1.0 00:13:19.669 00:13:19.669 00:13:19.669 Commands Supported and Effects 00:13:19.669 ============================== 00:13:19.669 Admin Commands 00:13:19.669 -------------- 00:13:19.669 Delete I/O Submission Queue (00h): Supported 00:13:19.669 Create I/O Submission Queue (01h): Supported 00:13:19.670 
Get Log Page (02h): Supported 00:13:19.670 Delete I/O Completion Queue (04h): Supported 00:13:19.670 Create I/O Completion Queue (05h): Supported 00:13:19.670 Identify (06h): Supported 00:13:19.670 Abort (08h): Supported 00:13:19.670 Set Features (09h): Supported 00:13:19.670 Get Features (0Ah): Supported 00:13:19.670 Asynchronous Event Request (0Ch): Supported 00:13:19.670 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:19.670 Directive Send (19h): Supported 00:13:19.670 Directive Receive (1Ah): Supported 00:13:19.670 Virtualization Management (1Ch): Supported 00:13:19.670 Doorbell Buffer Config (7Ch): Supported 00:13:19.670 Format NVM (80h): Supported LBA-Change 00:13:19.670 I/O Commands 00:13:19.670 ------------ 00:13:19.670 Flush (00h): Supported LBA-Change 00:13:19.670 Write (01h): Supported LBA-Change 00:13:19.670 Read (02h): Supported 00:13:19.670 Compare (05h): Supported 00:13:19.670 Write Zeroes (08h): Supported LBA-Change 00:13:19.670 Dataset Management (09h): Supported LBA-Change 00:13:19.670 Unknown (0Ch): Supported 00:13:19.670 Unknown (12h): Supported 00:13:19.670 Copy (19h): Supported LBA-Change 00:13:19.670 Unknown (1Dh): Supported LBA-Change 00:13:19.670 00:13:19.670 Error Log 00:13:19.670 ========= 00:13:19.670 00:13:19.670 Arbitration 00:13:19.670 =========== 00:13:19.670 Arbitration Burst: no limit 00:13:19.670 00:13:19.670 Power Management 00:13:19.670 ================ 00:13:19.670 Number of Power States: 1 00:13:19.670 Current Power State: Power State #0 00:13:19.670 Power State #0: 00:13:19.670 Max Power: 25.00 W 00:13:19.670 Non-Operational State: Operational 00:13:19.670 Entry Latency: 16 microseconds 00:13:19.670 Exit Latency: 4 microseconds 00:13:19.670 Relative Read Throughput: 0 00:13:19.670 Relative Read Latency: 0 00:13:19.670 Relative Write Throughput: 0 00:13:19.670 Relative Write Latency: 0 00:13:19.670 Idle Power: Not Reported 00:13:19.670 Active Power: Not Reported 00:13:19.670 Non-Operational Permissive Mode: Not Supported 00:13:19.670 00:13:19.670 Health Information 00:13:19.670 ================== 00:13:19.670 Critical Warnings: 00:13:19.670 Available Spare Space: OK 00:13:19.670 Temperature: OK 00:13:19.670 Device Reliability: OK 00:13:19.670 Read Only: No 00:13:19.670 Volatile Memory Backup: OK 00:13:19.670 Current Temperature: 323 Kelvin (50 Celsius) 00:13:19.670 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:19.670 Available Spare: 0% 00:13:19.670 Available Spare Threshold: 0% 00:13:19.670 Life Percentage Used: 0% 00:13:19.670 Data Units Read: 746 00:13:19.670 Data Units Written: 674 00:13:19.670 Host Read Commands: 36928 00:13:19.670 Host Write Commands: 36714 00:13:19.670 Controller Busy Time: 0 minutes 00:13:19.670 Power Cycles: 0 00:13:19.670 Power On Hours: 0 hours 00:13:19.670 Unsafe Shutdowns: 0 00:13:19.670 Unrecoverable Media Errors: 0 00:13:19.670 Lifetime Error Log Entries: 0 00:13:19.670 Warning Temperature Time: 0 minutes 00:13:19.670 Critical Temperature Time: 0 minutes 00:13:19.670 00:13:19.670 Number of Queues 00:13:19.670 ================ 00:13:19.670 Number of I/O Submission Queues: 64 00:13:19.670 Number of I/O Completion Queues: 64 00:13:19.670 00:13:19.670 ZNS Specific Controller Data 00:13:19.670 ============================ 00:13:19.670 Zone Append Size Limit: 0 00:13:19.670 00:13:19.670 00:13:19.670 Active Namespaces 00:13:19.670 ================= 00:13:19.670 Namespace ID:1 00:13:19.670 Error Recovery Timeout: Unlimited 00:13:19.670 Command Set Identifier: NVM (00h) 00:13:19.670 Deallocate: Supported 
00:13:19.670 Deallocated/Unwritten Error: Supported 00:13:19.670 Deallocated Read Value: All 0x00 00:13:19.670 Deallocate in Write Zeroes: Not Supported 00:13:19.670 Deallocated Guard Field: 0xFFFF 00:13:19.670 Flush: Supported 00:13:19.670 Reservation: Not Supported 00:13:19.670 Metadata Transferred as: Separate Metadata Buffer 00:13:19.670 Namespace Sharing Capabilities: Private 00:13:19.670 Size (in LBAs): 1548666 (5GiB) 00:13:19.670 Capacity (in LBAs): 1548666 (5GiB) 00:13:19.670 Utilization (in LBAs): 1548666 (5GiB) 00:13:19.670 Thin Provisioning: Not Supported 00:13:19.670 Per-NS Atomic Units: No 00:13:19.670 Maximum Single Source Range Length: 128 00:13:19.670 Maximum Copy Length: 128 00:13:19.670 Maximum Source Range Count: 128 00:13:19.670 NGUID/EUI64 Never Reused: No 00:13:19.670 Namespace Write Protected: No 00:13:19.670 Number of LBA Formats: 8 00:13:19.670 Current LBA Format: LBA Format #07 00:13:19.670 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:19.670 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:19.670 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:19.670 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:19.670 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:19.670 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:19.670 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:19.670 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:19.670 00:13:19.670 NVM Specific Namespace Data 00:13:19.670 =========================== 00:13:19.670 Logical Block Storage Tag Mask: 0 00:13:19.670 Protection Information Capabilities: 00:13:19.670 16b Guard Protection Information Storage Tag Support: No 00:13:19.670 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:19.670 Storage Tag Check Read Support: No 00:13:19.670 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.670 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.670 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.670 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.670 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.670 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.670 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.670 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:19.670 11:24:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:19.670 11:24:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:13:20.237 ===================================================== 00:13:20.237 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:20.237 ===================================================== 00:13:20.237 Controller Capabilities/Features 00:13:20.237 ================================ 00:13:20.237 Vendor ID: 1b36 00:13:20.237 Subsystem Vendor ID: 1af4 00:13:20.237 Serial Number: 12341 00:13:20.237 Model Number: QEMU NVMe Ctrl 00:13:20.237 Firmware Version: 8.0.0 00:13:20.237 Recommended Arb Burst: 6 00:13:20.237 IEEE OUI Identifier: 00 54 52 00:13:20.237 Multi-path I/O 00:13:20.237 May have multiple subsystem ports: No 00:13:20.237 May have multiple 
controllers: No 00:13:20.237 Associated with SR-IOV VF: No 00:13:20.237 Max Data Transfer Size: 524288 00:13:20.237 Max Number of Namespaces: 256 00:13:20.237 Max Number of I/O Queues: 64 00:13:20.237 NVMe Specification Version (VS): 1.4 00:13:20.237 NVMe Specification Version (Identify): 1.4 00:13:20.237 Maximum Queue Entries: 2048 00:13:20.237 Contiguous Queues Required: Yes 00:13:20.237 Arbitration Mechanisms Supported 00:13:20.237 Weighted Round Robin: Not Supported 00:13:20.237 Vendor Specific: Not Supported 00:13:20.237 Reset Timeout: 7500 ms 00:13:20.237 Doorbell Stride: 4 bytes 00:13:20.237 NVM Subsystem Reset: Not Supported 00:13:20.237 Command Sets Supported 00:13:20.237 NVM Command Set: Supported 00:13:20.237 Boot Partition: Not Supported 00:13:20.237 Memory Page Size Minimum: 4096 bytes 00:13:20.237 Memory Page Size Maximum: 65536 bytes 00:13:20.237 Persistent Memory Region: Not Supported 00:13:20.237 Optional Asynchronous Events Supported 00:13:20.237 Namespace Attribute Notices: Supported 00:13:20.237 Firmware Activation Notices: Not Supported 00:13:20.237 ANA Change Notices: Not Supported 00:13:20.237 PLE Aggregate Log Change Notices: Not Supported 00:13:20.237 LBA Status Info Alert Notices: Not Supported 00:13:20.237 EGE Aggregate Log Change Notices: Not Supported 00:13:20.237 Normal NVM Subsystem Shutdown event: Not Supported 00:13:20.237 Zone Descriptor Change Notices: Not Supported 00:13:20.237 Discovery Log Change Notices: Not Supported 00:13:20.237 Controller Attributes 00:13:20.237 128-bit Host Identifier: Not Supported 00:13:20.237 Non-Operational Permissive Mode: Not Supported 00:13:20.237 NVM Sets: Not Supported 00:13:20.237 Read Recovery Levels: Not Supported 00:13:20.237 Endurance Groups: Not Supported 00:13:20.237 Predictable Latency Mode: Not Supported 00:13:20.237 Traffic Based Keep ALive: Not Supported 00:13:20.237 Namespace Granularity: Not Supported 00:13:20.237 SQ Associations: Not Supported 00:13:20.237 UUID List: Not Supported 00:13:20.237 Multi-Domain Subsystem: Not Supported 00:13:20.237 Fixed Capacity Management: Not Supported 00:13:20.237 Variable Capacity Management: Not Supported 00:13:20.237 Delete Endurance Group: Not Supported 00:13:20.237 Delete NVM Set: Not Supported 00:13:20.237 Extended LBA Formats Supported: Supported 00:13:20.237 Flexible Data Placement Supported: Not Supported 00:13:20.237 00:13:20.237 Controller Memory Buffer Support 00:13:20.237 ================================ 00:13:20.237 Supported: No 00:13:20.237 00:13:20.237 Persistent Memory Region Support 00:13:20.237 ================================ 00:13:20.237 Supported: No 00:13:20.237 00:13:20.237 Admin Command Set Attributes 00:13:20.237 ============================ 00:13:20.237 Security Send/Receive: Not Supported 00:13:20.237 Format NVM: Supported 00:13:20.237 Firmware Activate/Download: Not Supported 00:13:20.237 Namespace Management: Supported 00:13:20.237 Device Self-Test: Not Supported 00:13:20.237 Directives: Supported 00:13:20.237 NVMe-MI: Not Supported 00:13:20.237 Virtualization Management: Not Supported 00:13:20.237 Doorbell Buffer Config: Supported 00:13:20.237 Get LBA Status Capability: Not Supported 00:13:20.237 Command & Feature Lockdown Capability: Not Supported 00:13:20.237 Abort Command Limit: 4 00:13:20.237 Async Event Request Limit: 4 00:13:20.237 Number of Firmware Slots: N/A 00:13:20.237 Firmware Slot 1 Read-Only: N/A 00:13:20.237 Firmware Activation Without Reset: N/A 00:13:20.237 Multiple Update Detection Support: N/A 00:13:20.237 Firmware Update 
Granularity: No Information Provided 00:13:20.237 Per-Namespace SMART Log: Yes 00:13:20.237 Asymmetric Namespace Access Log Page: Not Supported 00:13:20.237 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:20.237 Command Effects Log Page: Supported 00:13:20.237 Get Log Page Extended Data: Supported 00:13:20.237 Telemetry Log Pages: Not Supported 00:13:20.237 Persistent Event Log Pages: Not Supported 00:13:20.237 Supported Log Pages Log Page: May Support 00:13:20.237 Commands Supported & Effects Log Page: Not Supported 00:13:20.237 Feature Identifiers & Effects Log Page:May Support 00:13:20.237 NVMe-MI Commands & Effects Log Page: May Support 00:13:20.237 Data Area 4 for Telemetry Log: Not Supported 00:13:20.237 Error Log Page Entries Supported: 1 00:13:20.237 Keep Alive: Not Supported 00:13:20.237 00:13:20.237 NVM Command Set Attributes 00:13:20.237 ========================== 00:13:20.237 Submission Queue Entry Size 00:13:20.237 Max: 64 00:13:20.237 Min: 64 00:13:20.237 Completion Queue Entry Size 00:13:20.237 Max: 16 00:13:20.237 Min: 16 00:13:20.237 Number of Namespaces: 256 00:13:20.237 Compare Command: Supported 00:13:20.237 Write Uncorrectable Command: Not Supported 00:13:20.237 Dataset Management Command: Supported 00:13:20.237 Write Zeroes Command: Supported 00:13:20.237 Set Features Save Field: Supported 00:13:20.237 Reservations: Not Supported 00:13:20.237 Timestamp: Supported 00:13:20.237 Copy: Supported 00:13:20.237 Volatile Write Cache: Present 00:13:20.237 Atomic Write Unit (Normal): 1 00:13:20.237 Atomic Write Unit (PFail): 1 00:13:20.237 Atomic Compare & Write Unit: 1 00:13:20.237 Fused Compare & Write: Not Supported 00:13:20.237 Scatter-Gather List 00:13:20.237 SGL Command Set: Supported 00:13:20.237 SGL Keyed: Not Supported 00:13:20.237 SGL Bit Bucket Descriptor: Not Supported 00:13:20.237 SGL Metadata Pointer: Not Supported 00:13:20.237 Oversized SGL: Not Supported 00:13:20.237 SGL Metadata Address: Not Supported 00:13:20.237 SGL Offset: Not Supported 00:13:20.237 Transport SGL Data Block: Not Supported 00:13:20.237 Replay Protected Memory Block: Not Supported 00:13:20.237 00:13:20.237 Firmware Slot Information 00:13:20.237 ========================= 00:13:20.237 Active slot: 1 00:13:20.237 Slot 1 Firmware Revision: 1.0 00:13:20.237 00:13:20.237 00:13:20.237 Commands Supported and Effects 00:13:20.237 ============================== 00:13:20.237 Admin Commands 00:13:20.237 -------------- 00:13:20.237 Delete I/O Submission Queue (00h): Supported 00:13:20.237 Create I/O Submission Queue (01h): Supported 00:13:20.237 Get Log Page (02h): Supported 00:13:20.237 Delete I/O Completion Queue (04h): Supported 00:13:20.237 Create I/O Completion Queue (05h): Supported 00:13:20.237 Identify (06h): Supported 00:13:20.237 Abort (08h): Supported 00:13:20.237 Set Features (09h): Supported 00:13:20.237 Get Features (0Ah): Supported 00:13:20.237 Asynchronous Event Request (0Ch): Supported 00:13:20.237 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:20.237 Directive Send (19h): Supported 00:13:20.237 Directive Receive (1Ah): Supported 00:13:20.237 Virtualization Management (1Ch): Supported 00:13:20.237 Doorbell Buffer Config (7Ch): Supported 00:13:20.237 Format NVM (80h): Supported LBA-Change 00:13:20.237 I/O Commands 00:13:20.237 ------------ 00:13:20.237 Flush (00h): Supported LBA-Change 00:13:20.237 Write (01h): Supported LBA-Change 00:13:20.238 Read (02h): Supported 00:13:20.238 Compare (05h): Supported 00:13:20.238 Write Zeroes (08h): Supported LBA-Change 00:13:20.238 
Dataset Management (09h): Supported LBA-Change 00:13:20.238 Unknown (0Ch): Supported 00:13:20.238 Unknown (12h): Supported 00:13:20.238 Copy (19h): Supported LBA-Change 00:13:20.238 Unknown (1Dh): Supported LBA-Change 00:13:20.238 00:13:20.238 Error Log 00:13:20.238 ========= 00:13:20.238 00:13:20.238 Arbitration 00:13:20.238 =========== 00:13:20.238 Arbitration Burst: no limit 00:13:20.238 00:13:20.238 Power Management 00:13:20.238 ================ 00:13:20.238 Number of Power States: 1 00:13:20.238 Current Power State: Power State #0 00:13:20.238 Power State #0: 00:13:20.238 Max Power: 25.00 W 00:13:20.238 Non-Operational State: Operational 00:13:20.238 Entry Latency: 16 microseconds 00:13:20.238 Exit Latency: 4 microseconds 00:13:20.238 Relative Read Throughput: 0 00:13:20.238 Relative Read Latency: 0 00:13:20.238 Relative Write Throughput: 0 00:13:20.238 Relative Write Latency: 0 00:13:20.238 Idle Power: Not Reported 00:13:20.238 Active Power: Not Reported 00:13:20.238 Non-Operational Permissive Mode: Not Supported 00:13:20.238 00:13:20.238 Health Information 00:13:20.238 ================== 00:13:20.238 Critical Warnings: 00:13:20.238 Available Spare Space: OK 00:13:20.238 Temperature: OK 00:13:20.238 Device Reliability: OK 00:13:20.238 Read Only: No 00:13:20.238 Volatile Memory Backup: OK 00:13:20.238 Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.238 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:20.238 Available Spare: 0% 00:13:20.238 Available Spare Threshold: 0% 00:13:20.238 Life Percentage Used: 0% 00:13:20.238 Data Units Read: 1130 00:13:20.238 Data Units Written: 998 00:13:20.238 Host Read Commands: 54373 00:13:20.238 Host Write Commands: 53164 00:13:20.238 Controller Busy Time: 0 minutes 00:13:20.238 Power Cycles: 0 00:13:20.238 Power On Hours: 0 hours 00:13:20.238 Unsafe Shutdowns: 0 00:13:20.238 Unrecoverable Media Errors: 0 00:13:20.238 Lifetime Error Log Entries: 0 00:13:20.238 Warning Temperature Time: 0 minutes 00:13:20.238 Critical Temperature Time: 0 minutes 00:13:20.238 00:13:20.238 Number of Queues 00:13:20.238 ================ 00:13:20.238 Number of I/O Submission Queues: 64 00:13:20.238 Number of I/O Completion Queues: 64 00:13:20.238 00:13:20.238 ZNS Specific Controller Data 00:13:20.238 ============================ 00:13:20.238 Zone Append Size Limit: 0 00:13:20.238 00:13:20.238 00:13:20.238 Active Namespaces 00:13:20.238 ================= 00:13:20.238 Namespace ID:1 00:13:20.238 Error Recovery Timeout: Unlimited 00:13:20.238 Command Set Identifier: NVM (00h) 00:13:20.238 Deallocate: Supported 00:13:20.238 Deallocated/Unwritten Error: Supported 00:13:20.238 Deallocated Read Value: All 0x00 00:13:20.238 Deallocate in Write Zeroes: Not Supported 00:13:20.238 Deallocated Guard Field: 0xFFFF 00:13:20.238 Flush: Supported 00:13:20.238 Reservation: Not Supported 00:13:20.238 Namespace Sharing Capabilities: Private 00:13:20.238 Size (in LBAs): 1310720 (5GiB) 00:13:20.238 Capacity (in LBAs): 1310720 (5GiB) 00:13:20.238 Utilization (in LBAs): 1310720 (5GiB) 00:13:20.238 Thin Provisioning: Not Supported 00:13:20.238 Per-NS Atomic Units: No 00:13:20.238 Maximum Single Source Range Length: 128 00:13:20.238 Maximum Copy Length: 128 00:13:20.238 Maximum Source Range Count: 128 00:13:20.238 NGUID/EUI64 Never Reused: No 00:13:20.238 Namespace Write Protected: No 00:13:20.238 Number of LBA Formats: 8 00:13:20.238 Current LBA Format: LBA Format #04 00:13:20.238 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:20.238 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:13:20.238 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:20.238 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:20.238 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:20.238 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:20.238 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:20.238 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:20.238 00:13:20.238 NVM Specific Namespace Data 00:13:20.238 =========================== 00:13:20.238 Logical Block Storage Tag Mask: 0 00:13:20.238 Protection Information Capabilities: 00:13:20.238 16b Guard Protection Information Storage Tag Support: No 00:13:20.238 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:20.238 Storage Tag Check Read Support: No 00:13:20.238 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.238 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.238 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.238 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.238 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.238 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.238 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.238 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.238 11:24:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:20.238 11:24:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:13:20.498 ===================================================== 00:13:20.498 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:20.498 ===================================================== 00:13:20.498 Controller Capabilities/Features 00:13:20.498 ================================ 00:13:20.498 Vendor ID: 1b36 00:13:20.498 Subsystem Vendor ID: 1af4 00:13:20.498 Serial Number: 12342 00:13:20.498 Model Number: QEMU NVMe Ctrl 00:13:20.498 Firmware Version: 8.0.0 00:13:20.498 Recommended Arb Burst: 6 00:13:20.498 IEEE OUI Identifier: 00 54 52 00:13:20.498 Multi-path I/O 00:13:20.498 May have multiple subsystem ports: No 00:13:20.498 May have multiple controllers: No 00:13:20.498 Associated with SR-IOV VF: No 00:13:20.498 Max Data Transfer Size: 524288 00:13:20.498 Max Number of Namespaces: 256 00:13:20.498 Max Number of I/O Queues: 64 00:13:20.498 NVMe Specification Version (VS): 1.4 00:13:20.498 NVMe Specification Version (Identify): 1.4 00:13:20.498 Maximum Queue Entries: 2048 00:13:20.498 Contiguous Queues Required: Yes 00:13:20.498 Arbitration Mechanisms Supported 00:13:20.498 Weighted Round Robin: Not Supported 00:13:20.498 Vendor Specific: Not Supported 00:13:20.498 Reset Timeout: 7500 ms 00:13:20.498 Doorbell Stride: 4 bytes 00:13:20.498 NVM Subsystem Reset: Not Supported 00:13:20.498 Command Sets Supported 00:13:20.498 NVM Command Set: Supported 00:13:20.498 Boot Partition: Not Supported 00:13:20.498 Memory Page Size Minimum: 4096 bytes 00:13:20.498 Memory Page Size Maximum: 65536 bytes 00:13:20.498 Persistent Memory Region: Not Supported 00:13:20.498 Optional Asynchronous Events Supported 00:13:20.498 Namespace Attribute Notices: Supported 00:13:20.498 Firmware 
Activation Notices: Not Supported 00:13:20.498 ANA Change Notices: Not Supported 00:13:20.498 PLE Aggregate Log Change Notices: Not Supported 00:13:20.498 LBA Status Info Alert Notices: Not Supported 00:13:20.498 EGE Aggregate Log Change Notices: Not Supported 00:13:20.498 Normal NVM Subsystem Shutdown event: Not Supported 00:13:20.498 Zone Descriptor Change Notices: Not Supported 00:13:20.498 Discovery Log Change Notices: Not Supported 00:13:20.498 Controller Attributes 00:13:20.498 128-bit Host Identifier: Not Supported 00:13:20.498 Non-Operational Permissive Mode: Not Supported 00:13:20.498 NVM Sets: Not Supported 00:13:20.498 Read Recovery Levels: Not Supported 00:13:20.498 Endurance Groups: Not Supported 00:13:20.498 Predictable Latency Mode: Not Supported 00:13:20.498 Traffic Based Keep ALive: Not Supported 00:13:20.498 Namespace Granularity: Not Supported 00:13:20.498 SQ Associations: Not Supported 00:13:20.498 UUID List: Not Supported 00:13:20.498 Multi-Domain Subsystem: Not Supported 00:13:20.498 Fixed Capacity Management: Not Supported 00:13:20.498 Variable Capacity Management: Not Supported 00:13:20.498 Delete Endurance Group: Not Supported 00:13:20.498 Delete NVM Set: Not Supported 00:13:20.498 Extended LBA Formats Supported: Supported 00:13:20.498 Flexible Data Placement Supported: Not Supported 00:13:20.498 00:13:20.499 Controller Memory Buffer Support 00:13:20.499 ================================ 00:13:20.499 Supported: No 00:13:20.499 00:13:20.499 Persistent Memory Region Support 00:13:20.499 ================================ 00:13:20.499 Supported: No 00:13:20.499 00:13:20.499 Admin Command Set Attributes 00:13:20.499 ============================ 00:13:20.499 Security Send/Receive: Not Supported 00:13:20.499 Format NVM: Supported 00:13:20.499 Firmware Activate/Download: Not Supported 00:13:20.499 Namespace Management: Supported 00:13:20.499 Device Self-Test: Not Supported 00:13:20.499 Directives: Supported 00:13:20.499 NVMe-MI: Not Supported 00:13:20.499 Virtualization Management: Not Supported 00:13:20.499 Doorbell Buffer Config: Supported 00:13:20.499 Get LBA Status Capability: Not Supported 00:13:20.499 Command & Feature Lockdown Capability: Not Supported 00:13:20.499 Abort Command Limit: 4 00:13:20.499 Async Event Request Limit: 4 00:13:20.499 Number of Firmware Slots: N/A 00:13:20.499 Firmware Slot 1 Read-Only: N/A 00:13:20.499 Firmware Activation Without Reset: N/A 00:13:20.499 Multiple Update Detection Support: N/A 00:13:20.499 Firmware Update Granularity: No Information Provided 00:13:20.499 Per-Namespace SMART Log: Yes 00:13:20.499 Asymmetric Namespace Access Log Page: Not Supported 00:13:20.499 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:20.499 Command Effects Log Page: Supported 00:13:20.499 Get Log Page Extended Data: Supported 00:13:20.499 Telemetry Log Pages: Not Supported 00:13:20.499 Persistent Event Log Pages: Not Supported 00:13:20.499 Supported Log Pages Log Page: May Support 00:13:20.499 Commands Supported & Effects Log Page: Not Supported 00:13:20.499 Feature Identifiers & Effects Log Page:May Support 00:13:20.499 NVMe-MI Commands & Effects Log Page: May Support 00:13:20.499 Data Area 4 for Telemetry Log: Not Supported 00:13:20.499 Error Log Page Entries Supported: 1 00:13:20.499 Keep Alive: Not Supported 00:13:20.499 00:13:20.499 NVM Command Set Attributes 00:13:20.499 ========================== 00:13:20.499 Submission Queue Entry Size 00:13:20.499 Max: 64 00:13:20.499 Min: 64 00:13:20.499 Completion Queue Entry Size 00:13:20.499 Max: 16 
00:13:20.499 Min: 16 00:13:20.499 Number of Namespaces: 256 00:13:20.499 Compare Command: Supported 00:13:20.499 Write Uncorrectable Command: Not Supported 00:13:20.499 Dataset Management Command: Supported 00:13:20.499 Write Zeroes Command: Supported 00:13:20.499 Set Features Save Field: Supported 00:13:20.499 Reservations: Not Supported 00:13:20.499 Timestamp: Supported 00:13:20.499 Copy: Supported 00:13:20.499 Volatile Write Cache: Present 00:13:20.499 Atomic Write Unit (Normal): 1 00:13:20.499 Atomic Write Unit (PFail): 1 00:13:20.499 Atomic Compare & Write Unit: 1 00:13:20.499 Fused Compare & Write: Not Supported 00:13:20.499 Scatter-Gather List 00:13:20.499 SGL Command Set: Supported 00:13:20.499 SGL Keyed: Not Supported 00:13:20.499 SGL Bit Bucket Descriptor: Not Supported 00:13:20.499 SGL Metadata Pointer: Not Supported 00:13:20.499 Oversized SGL: Not Supported 00:13:20.499 SGL Metadata Address: Not Supported 00:13:20.499 SGL Offset: Not Supported 00:13:20.499 Transport SGL Data Block: Not Supported 00:13:20.499 Replay Protected Memory Block: Not Supported 00:13:20.499 00:13:20.499 Firmware Slot Information 00:13:20.499 ========================= 00:13:20.499 Active slot: 1 00:13:20.499 Slot 1 Firmware Revision: 1.0 00:13:20.499 00:13:20.499 00:13:20.499 Commands Supported and Effects 00:13:20.499 ============================== 00:13:20.499 Admin Commands 00:13:20.499 -------------- 00:13:20.499 Delete I/O Submission Queue (00h): Supported 00:13:20.499 Create I/O Submission Queue (01h): Supported 00:13:20.499 Get Log Page (02h): Supported 00:13:20.499 Delete I/O Completion Queue (04h): Supported 00:13:20.499 Create I/O Completion Queue (05h): Supported 00:13:20.499 Identify (06h): Supported 00:13:20.499 Abort (08h): Supported 00:13:20.499 Set Features (09h): Supported 00:13:20.499 Get Features (0Ah): Supported 00:13:20.499 Asynchronous Event Request (0Ch): Supported 00:13:20.499 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:20.499 Directive Send (19h): Supported 00:13:20.499 Directive Receive (1Ah): Supported 00:13:20.499 Virtualization Management (1Ch): Supported 00:13:20.499 Doorbell Buffer Config (7Ch): Supported 00:13:20.499 Format NVM (80h): Supported LBA-Change 00:13:20.499 I/O Commands 00:13:20.499 ------------ 00:13:20.499 Flush (00h): Supported LBA-Change 00:13:20.499 Write (01h): Supported LBA-Change 00:13:20.499 Read (02h): Supported 00:13:20.499 Compare (05h): Supported 00:13:20.499 Write Zeroes (08h): Supported LBA-Change 00:13:20.499 Dataset Management (09h): Supported LBA-Change 00:13:20.499 Unknown (0Ch): Supported 00:13:20.499 Unknown (12h): Supported 00:13:20.499 Copy (19h): Supported LBA-Change 00:13:20.499 Unknown (1Dh): Supported LBA-Change 00:13:20.499 00:13:20.499 Error Log 00:13:20.499 ========= 00:13:20.499 00:13:20.499 Arbitration 00:13:20.499 =========== 00:13:20.499 Arbitration Burst: no limit 00:13:20.499 00:13:20.499 Power Management 00:13:20.499 ================ 00:13:20.499 Number of Power States: 1 00:13:20.499 Current Power State: Power State #0 00:13:20.499 Power State #0: 00:13:20.499 Max Power: 25.00 W 00:13:20.499 Non-Operational State: Operational 00:13:20.499 Entry Latency: 16 microseconds 00:13:20.499 Exit Latency: 4 microseconds 00:13:20.499 Relative Read Throughput: 0 00:13:20.499 Relative Read Latency: 0 00:13:20.499 Relative Write Throughput: 0 00:13:20.499 Relative Write Latency: 0 00:13:20.499 Idle Power: Not Reported 00:13:20.499 Active Power: Not Reported 00:13:20.499 Non-Operational Permissive Mode: Not Supported 
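The Health Information block that follows reports temperatures as raw NVMe values in Kelvin, and the namespace sections report sizes in LBAs of the current LBA format. A minimal standalone C sketch of both conversions, using values taken from this dump (illustrative only, not part of the SPDK tooling):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* NVMe encodes composite temperature in Kelvin; the identify tool
     * prints Celsius as K - 273, which is how 323 Kelvin appears as
     * 50 Celsius (and the 343 Kelvin threshold as 70 Celsius) below. */
    unsigned cur_k = 323, thresh_k = 343;
    printf("Current: %u K = %u C\n", cur_k, cur_k - 273);
    printf("Threshold: %u K = %u C\n", thresh_k, thresh_k - 273);

    /* Namespace capacity is LBA count times the data size of the current
     * LBA format: 1048576 LBAs at Format #04 (4096-byte blocks) = 4 GiB. */
    uint64_t nsze = 1048576, lba_size = 4096;
    printf("Capacity: %llu bytes = %llu GiB\n",
           (unsigned long long)(nsze * lba_size),
           (unsigned long long)((nsze * lba_size) >> 30));
    return 0;
}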
00:13:20.499 00:13:20.499 Health Information 00:13:20.499 ================== 00:13:20.499 Critical Warnings: 00:13:20.499 Available Spare Space: OK 00:13:20.499 Temperature: OK 00:13:20.499 Device Reliability: OK 00:13:20.499 Read Only: No 00:13:20.499 Volatile Memory Backup: OK 00:13:20.499 Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.499 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:20.499 Available Spare: 0% 00:13:20.499 Available Spare Threshold: 0% 00:13:20.499 Life Percentage Used: 0% 00:13:20.499 Data Units Read: 2371 00:13:20.499 Data Units Written: 2158 00:13:20.499 Host Read Commands: 112784 00:13:20.499 Host Write Commands: 111053 00:13:20.499 Controller Busy Time: 0 minutes 00:13:20.499 Power Cycles: 0 00:13:20.499 Power On Hours: 0 hours 00:13:20.499 Unsafe Shutdowns: 0 00:13:20.499 Unrecoverable Media Errors: 0 00:13:20.499 Lifetime Error Log Entries: 0 00:13:20.499 Warning Temperature Time: 0 minutes 00:13:20.499 Critical Temperature Time: 0 minutes 00:13:20.499 00:13:20.499 Number of Queues 00:13:20.499 ================ 00:13:20.499 Number of I/O Submission Queues: 64 00:13:20.499 Number of I/O Completion Queues: 64 00:13:20.499 00:13:20.499 ZNS Specific Controller Data 00:13:20.499 ============================ 00:13:20.499 Zone Append Size Limit: 0 00:13:20.499 00:13:20.499 00:13:20.499 Active Namespaces 00:13:20.499 ================= 00:13:20.499 Namespace ID:1 00:13:20.499 Error Recovery Timeout: Unlimited 00:13:20.499 Command Set Identifier: NVM (00h) 00:13:20.499 Deallocate: Supported 00:13:20.499 Deallocated/Unwritten Error: Supported 00:13:20.499 Deallocated Read Value: All 0x00 00:13:20.499 Deallocate in Write Zeroes: Not Supported 00:13:20.499 Deallocated Guard Field: 0xFFFF 00:13:20.499 Flush: Supported 00:13:20.499 Reservation: Not Supported 00:13:20.499 Namespace Sharing Capabilities: Private 00:13:20.499 Size (in LBAs): 1048576 (4GiB) 00:13:20.499 Capacity (in LBAs): 1048576 (4GiB) 00:13:20.499 Utilization (in LBAs): 1048576 (4GiB) 00:13:20.499 Thin Provisioning: Not Supported 00:13:20.499 Per-NS Atomic Units: No 00:13:20.499 Maximum Single Source Range Length: 128 00:13:20.499 Maximum Copy Length: 128 00:13:20.499 Maximum Source Range Count: 128 00:13:20.499 NGUID/EUI64 Never Reused: No 00:13:20.499 Namespace Write Protected: No 00:13:20.499 Number of LBA Formats: 8 00:13:20.499 Current LBA Format: LBA Format #04 00:13:20.499 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:20.499 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:20.499 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:20.499 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:20.499 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:20.499 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:20.499 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:20.499 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:20.499 00:13:20.499 NVM Specific Namespace Data 00:13:20.499 =========================== 00:13:20.499 Logical Block Storage Tag Mask: 0 00:13:20.499 Protection Information Capabilities: 00:13:20.499 16b Guard Protection Information Storage Tag Support: No 00:13:20.499 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:20.499 Storage Tag Check Read Support: No 00:13:20.500 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Namespace ID:2 00:13:20.500 Error Recovery Timeout: Unlimited 00:13:20.500 Command Set Identifier: NVM (00h) 00:13:20.500 Deallocate: Supported 00:13:20.500 Deallocated/Unwritten Error: Supported 00:13:20.500 Deallocated Read Value: All 0x00 00:13:20.500 Deallocate in Write Zeroes: Not Supported 00:13:20.500 Deallocated Guard Field: 0xFFFF 00:13:20.500 Flush: Supported 00:13:20.500 Reservation: Not Supported 00:13:20.500 Namespace Sharing Capabilities: Private 00:13:20.500 Size (in LBAs): 1048576 (4GiB) 00:13:20.500 Capacity (in LBAs): 1048576 (4GiB) 00:13:20.500 Utilization (in LBAs): 1048576 (4GiB) 00:13:20.500 Thin Provisioning: Not Supported 00:13:20.500 Per-NS Atomic Units: No 00:13:20.500 Maximum Single Source Range Length: 128 00:13:20.500 Maximum Copy Length: 128 00:13:20.500 Maximum Source Range Count: 128 00:13:20.500 NGUID/EUI64 Never Reused: No 00:13:20.500 Namespace Write Protected: No 00:13:20.500 Number of LBA Formats: 8 00:13:20.500 Current LBA Format: LBA Format #04 00:13:20.500 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:20.500 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:20.500 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:20.500 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:20.500 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:20.500 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:20.500 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:20.500 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:20.500 00:13:20.500 NVM Specific Namespace Data 00:13:20.500 =========================== 00:13:20.500 Logical Block Storage Tag Mask: 0 00:13:20.500 Protection Information Capabilities: 00:13:20.500 16b Guard Protection Information Storage Tag Support: No 00:13:20.500 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:20.500 Storage Tag Check Read Support: No 00:13:20.500 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Namespace ID:3 00:13:20.500 Error Recovery Timeout: Unlimited 00:13:20.500 Command Set Identifier: NVM (00h) 00:13:20.500 Deallocate: Supported 00:13:20.500 Deallocated/Unwritten Error: Supported 00:13:20.500 Deallocated Read 
Value: All 0x00 00:13:20.500 Deallocate in Write Zeroes: Not Supported 00:13:20.500 Deallocated Guard Field: 0xFFFF 00:13:20.500 Flush: Supported 00:13:20.500 Reservation: Not Supported 00:13:20.500 Namespace Sharing Capabilities: Private 00:13:20.500 Size (in LBAs): 1048576 (4GiB) 00:13:20.500 Capacity (in LBAs): 1048576 (4GiB) 00:13:20.500 Utilization (in LBAs): 1048576 (4GiB) 00:13:20.500 Thin Provisioning: Not Supported 00:13:20.500 Per-NS Atomic Units: No 00:13:20.500 Maximum Single Source Range Length: 128 00:13:20.500 Maximum Copy Length: 128 00:13:20.500 Maximum Source Range Count: 128 00:13:20.500 NGUID/EUI64 Never Reused: No 00:13:20.500 Namespace Write Protected: No 00:13:20.500 Number of LBA Formats: 8 00:13:20.500 Current LBA Format: LBA Format #04 00:13:20.500 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:20.500 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:20.500 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:20.500 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:20.500 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:20.500 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:20.500 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:20.500 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:20.500 00:13:20.500 NVM Specific Namespace Data 00:13:20.500 =========================== 00:13:20.500 Logical Block Storage Tag Mask: 0 00:13:20.500 Protection Information Capabilities: 00:13:20.500 16b Guard Protection Information Storage Tag Support: No 00:13:20.500 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:20.500 Storage Tag Check Read Support: No 00:13:20.500 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.500 11:24:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:20.500 11:24:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:13:20.784 ===================================================== 00:13:20.784 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:20.784 ===================================================== 00:13:20.784 Controller Capabilities/Features 00:13:20.784 ================================ 00:13:20.784 Vendor ID: 1b36 00:13:20.784 Subsystem Vendor ID: 1af4 00:13:20.784 Serial Number: 12343 00:13:20.784 Model Number: QEMU NVMe Ctrl 00:13:20.784 Firmware Version: 8.0.0 00:13:20.784 Recommended Arb Burst: 6 00:13:20.784 IEEE OUI Identifier: 00 54 52 00:13:20.784 Multi-path I/O 00:13:20.784 May have multiple subsystem ports: No 00:13:20.784 May have multiple controllers: Yes 00:13:20.784 Associated with SR-IOV VF: No 00:13:20.784 Max Data Transfer Size: 524288 00:13:20.784 Max Number of Namespaces: 
256 00:13:20.784 Max Number of I/O Queues: 64 00:13:20.784 NVMe Specification Version (VS): 1.4 00:13:20.784 NVMe Specification Version (Identify): 1.4 00:13:20.784 Maximum Queue Entries: 2048 00:13:20.784 Contiguous Queues Required: Yes 00:13:20.784 Arbitration Mechanisms Supported 00:13:20.784 Weighted Round Robin: Not Supported 00:13:20.784 Vendor Specific: Not Supported 00:13:20.784 Reset Timeout: 7500 ms 00:13:20.784 Doorbell Stride: 4 bytes 00:13:20.784 NVM Subsystem Reset: Not Supported 00:13:20.784 Command Sets Supported 00:13:20.784 NVM Command Set: Supported 00:13:20.784 Boot Partition: Not Supported 00:13:20.784 Memory Page Size Minimum: 4096 bytes 00:13:20.784 Memory Page Size Maximum: 65536 bytes 00:13:20.784 Persistent Memory Region: Not Supported 00:13:20.784 Optional Asynchronous Events Supported 00:13:20.784 Namespace Attribute Notices: Supported 00:13:20.784 Firmware Activation Notices: Not Supported 00:13:20.784 ANA Change Notices: Not Supported 00:13:20.784 PLE Aggregate Log Change Notices: Not Supported 00:13:20.784 LBA Status Info Alert Notices: Not Supported 00:13:20.784 EGE Aggregate Log Change Notices: Not Supported 00:13:20.784 Normal NVM Subsystem Shutdown event: Not Supported 00:13:20.784 Zone Descriptor Change Notices: Not Supported 00:13:20.784 Discovery Log Change Notices: Not Supported 00:13:20.784 Controller Attributes 00:13:20.784 128-bit Host Identifier: Not Supported 00:13:20.784 Non-Operational Permissive Mode: Not Supported 00:13:20.784 NVM Sets: Not Supported 00:13:20.784 Read Recovery Levels: Not Supported 00:13:20.784 Endurance Groups: Supported 00:13:20.784 Predictable Latency Mode: Not Supported 00:13:20.784 Traffic Based Keep Alive: Not Supported 00:13:20.784 Namespace Granularity: Not Supported 00:13:20.784 SQ Associations: Not Supported 00:13:20.784 UUID List: Not Supported 00:13:20.784 Multi-Domain Subsystem: Not Supported 00:13:20.784 Fixed Capacity Management: Not Supported 00:13:20.784 Variable Capacity Management: Not Supported 00:13:20.784 Delete Endurance Group: Not Supported 00:13:20.784 Delete NVM Set: Not Supported 00:13:20.784 Extended LBA Formats Supported: Supported 00:13:20.784 Flexible Data Placement Supported: Supported 00:13:20.784 00:13:20.784 Controller Memory Buffer Support 00:13:20.784 ================================ 00:13:20.784 Supported: No 00:13:20.784 00:13:20.784 Persistent Memory Region Support 00:13:20.784 ================================ 00:13:20.784 Supported: No 00:13:20.784 00:13:20.784 Admin Command Set Attributes 00:13:20.784 ============================ 00:13:20.784 Security Send/Receive: Not Supported 00:13:20.784 Format NVM: Supported 00:13:20.784 Firmware Activate/Download: Not Supported 00:13:20.784 Namespace Management: Supported 00:13:20.784 Device Self-Test: Not Supported 00:13:20.784 Directives: Supported 00:13:20.784 NVMe-MI: Not Supported 00:13:20.784 Virtualization Management: Not Supported 00:13:20.784 Doorbell Buffer Config: Supported 00:13:20.784 Get LBA Status Capability: Not Supported 00:13:20.784 Command & Feature Lockdown Capability: Not Supported 00:13:20.784 Abort Command Limit: 4 00:13:20.784 Async Event Request Limit: 4 00:13:20.784 Number of Firmware Slots: N/A 00:13:20.784 Firmware Slot 1 Read-Only: N/A 00:13:20.784 Firmware Activation Without Reset: N/A 00:13:20.784 Multiple Update Detection Support: N/A 00:13:20.784 Firmware Update Granularity: No Information Provided 00:13:20.784 Per-Namespace SMART Log: Yes 00:13:20.784 Asymmetric Namespace Access Log Page: Not Supported
00:13:20.784 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:20.784 Command Effects Log Page: Supported 00:13:20.784 Get Log Page Extended Data: Supported 00:13:20.784 Telemetry Log Pages: Not Supported 00:13:20.784 Persistent Event Log Pages: Not Supported 00:13:20.784 Supported Log Pages Log Page: May Support 00:13:20.784 Commands Supported & Effects Log Page: Not Supported 00:13:20.784 Feature Identifiers & Effects Log Page: May Support 00:13:20.784 NVMe-MI Commands & Effects Log Page: May Support 00:13:20.784 Data Area 4 for Telemetry Log: Not Supported 00:13:20.784 Error Log Page Entries Supported: 1 00:13:20.784 Keep Alive: Not Supported 00:13:20.784 00:13:20.784 NVM Command Set Attributes 00:13:20.784 ========================== 00:13:20.784 Submission Queue Entry Size 00:13:20.784 Max: 64 00:13:20.784 Min: 64 00:13:20.784 Completion Queue Entry Size 00:13:20.784 Max: 16 00:13:20.784 Min: 16 00:13:20.784 Number of Namespaces: 256 00:13:20.784 Compare Command: Supported 00:13:20.784 Write Uncorrectable Command: Not Supported 00:13:20.784 Dataset Management Command: Supported 00:13:20.784 Write Zeroes Command: Supported 00:13:20.784 Set Features Save Field: Supported 00:13:20.784 Reservations: Not Supported 00:13:20.784 Timestamp: Supported 00:13:20.784 Copy: Supported 00:13:20.784 Volatile Write Cache: Present 00:13:20.784 Atomic Write Unit (Normal): 1 00:13:20.784 Atomic Write Unit (PFail): 1 00:13:20.784 Atomic Compare & Write Unit: 1 00:13:20.784 Fused Compare & Write: Not Supported 00:13:20.784 Scatter-Gather List 00:13:20.784 SGL Command Set: Supported 00:13:20.784 SGL Keyed: Not Supported 00:13:20.784 SGL Bit Bucket Descriptor: Not Supported 00:13:20.784 SGL Metadata Pointer: Not Supported 00:13:20.784 Oversized SGL: Not Supported 00:13:20.784 SGL Metadata Address: Not Supported 00:13:20.784 SGL Offset: Not Supported 00:13:20.784 Transport SGL Data Block: Not Supported 00:13:20.784 Replay Protected Memory Block: Not Supported 00:13:20.784 00:13:20.784 Firmware Slot Information 00:13:20.784 ========================= 00:13:20.784 Active slot: 1 00:13:20.784 Slot 1 Firmware Revision: 1.0 00:13:20.784 00:13:20.784 00:13:20.784 Commands Supported and Effects 00:13:20.784 ============================== 00:13:20.784 Admin Commands 00:13:20.784 -------------- 00:13:20.784 Delete I/O Submission Queue (00h): Supported 00:13:20.784 Create I/O Submission Queue (01h): Supported 00:13:20.784 Get Log Page (02h): Supported 00:13:20.784 Delete I/O Completion Queue (04h): Supported 00:13:20.784 Create I/O Completion Queue (05h): Supported 00:13:20.784 Identify (06h): Supported 00:13:20.784 Abort (08h): Supported 00:13:20.784 Set Features (09h): Supported 00:13:20.784 Get Features (0Ah): Supported 00:13:20.784 Asynchronous Event Request (0Ch): Supported 00:13:20.784 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:20.784 Directive Send (19h): Supported 00:13:20.784 Directive Receive (1Ah): Supported 00:13:20.784 Virtualization Management (1Ch): Supported 00:13:20.784 Doorbell Buffer Config (7Ch): Supported 00:13:20.784 Format NVM (80h): Supported LBA-Change 00:13:20.784 I/O Commands 00:13:20.784 ------------ 00:13:20.784 Flush (00h): Supported LBA-Change 00:13:20.784 Write (01h): Supported LBA-Change 00:13:20.784 Read (02h): Supported 00:13:20.784 Compare (05h): Supported 00:13:20.784 Write Zeroes (08h): Supported LBA-Change 00:13:20.784 Dataset Management (09h): Supported LBA-Change 00:13:20.784 Unknown (0Ch): Supported 00:13:20.784 Unknown (12h): Supported 00:13:20.784 Copy
(19h): Supported LBA-Change 00:13:20.784 Unknown (1Dh): Supported LBA-Change 00:13:20.784 00:13:20.784 Error Log 00:13:20.784 ========= 00:13:20.784 00:13:20.784 Arbitration 00:13:20.784 =========== 00:13:20.784 Arbitration Burst: no limit 00:13:20.784 00:13:20.784 Power Management 00:13:20.784 ================ 00:13:20.784 Number of Power States: 1 00:13:20.785 Current Power State: Power State #0 00:13:20.785 Power State #0: 00:13:20.785 Max Power: 25.00 W 00:13:20.785 Non-Operational State: Operational 00:13:20.785 Entry Latency: 16 microseconds 00:13:20.785 Exit Latency: 4 microseconds 00:13:20.785 Relative Read Throughput: 0 00:13:20.785 Relative Read Latency: 0 00:13:20.785 Relative Write Throughput: 0 00:13:20.785 Relative Write Latency: 0 00:13:20.785 Idle Power: Not Reported 00:13:20.785 Active Power: Not Reported 00:13:20.785 Non-Operational Permissive Mode: Not Supported 00:13:20.785 00:13:20.785 Health Information 00:13:20.785 ================== 00:13:20.785 Critical Warnings: 00:13:20.785 Available Spare Space: OK 00:13:20.785 Temperature: OK 00:13:20.785 Device Reliability: OK 00:13:20.785 Read Only: No 00:13:20.785 Volatile Memory Backup: OK 00:13:20.785 Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.785 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:20.785 Available Spare: 0% 00:13:20.785 Available Spare Threshold: 0% 00:13:20.785 Life Percentage Used: 0% 00:13:20.785 Data Units Read: 871 00:13:20.785 Data Units Written: 800 00:13:20.785 Host Read Commands: 38264 00:13:20.785 Host Write Commands: 37687 00:13:20.785 Controller Busy Time: 0 minutes 00:13:20.785 Power Cycles: 0 00:13:20.785 Power On Hours: 0 hours 00:13:20.785 Unsafe Shutdowns: 0 00:13:20.785 Unrecoverable Media Errors: 0 00:13:20.785 Lifetime Error Log Entries: 0 00:13:20.785 Warning Temperature Time: 0 minutes 00:13:20.785 Critical Temperature Time: 0 minutes 00:13:20.785 00:13:20.785 Number of Queues 00:13:20.785 ================ 00:13:20.785 Number of I/O Submission Queues: 64 00:13:20.785 Number of I/O Completion Queues: 64 00:13:20.785 00:13:20.785 ZNS Specific Controller Data 00:13:20.785 ============================ 00:13:20.785 Zone Append Size Limit: 0 00:13:20.785 00:13:20.785 00:13:20.785 Active Namespaces 00:13:20.785 ================= 00:13:20.785 Namespace ID:1 00:13:20.785 Error Recovery Timeout: Unlimited 00:13:20.785 Command Set Identifier: NVM (00h) 00:13:20.785 Deallocate: Supported 00:13:20.785 Deallocated/Unwritten Error: Supported 00:13:20.785 Deallocated Read Value: All 0x00 00:13:20.785 Deallocate in Write Zeroes: Not Supported 00:13:20.785 Deallocated Guard Field: 0xFFFF 00:13:20.785 Flush: Supported 00:13:20.785 Reservation: Not Supported 00:13:20.785 Namespace Sharing Capabilities: Multiple Controllers 00:13:20.785 Size (in LBAs): 262144 (1GiB) 00:13:20.785 Capacity (in LBAs): 262144 (1GiB) 00:13:20.785 Utilization (in LBAs): 262144 (1GiB) 00:13:20.785 Thin Provisioning: Not Supported 00:13:20.785 Per-NS Atomic Units: No 00:13:20.785 Maximum Single Source Range Length: 128 00:13:20.785 Maximum Copy Length: 128 00:13:20.785 Maximum Source Range Count: 128 00:13:20.785 NGUID/EUI64 Never Reused: No 00:13:20.785 Namespace Write Protected: No 00:13:20.785 Endurance group ID: 1 00:13:20.785 Number of LBA Formats: 8 00:13:20.785 Current LBA Format: LBA Format #04 00:13:20.785 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:20.785 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:20.785 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:20.785 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:13:20.785 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:20.785 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:20.785 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:20.785 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:20.785 00:13:20.785 Get Feature FDP: 00:13:20.785 ================ 00:13:20.785 Enabled: Yes 00:13:20.785 FDP configuration index: 0 00:13:20.785 00:13:20.785 FDP configurations log page 00:13:20.785 =========================== 00:13:20.785 Number of FDP configurations: 1 00:13:20.785 Version: 0 00:13:20.785 Size: 112 00:13:20.785 FDP Configuration Descriptor: 0 00:13:20.785 Descriptor Size: 96 00:13:20.785 Reclaim Group Identifier format: 2 00:13:20.785 FDP Volatile Write Cache: Not Present 00:13:20.785 FDP Configuration: Valid 00:13:20.785 Vendor Specific Size: 0 00:13:20.785 Number of Reclaim Groups: 2 00:13:20.785 Number of Reclaim Unit Handles: 8 00:13:20.785 Max Placement Identifiers: 128 00:13:20.785 Number of Namespaces Supported: 256 00:13:20.785 Reclaim unit Nominal Size: 6000000 bytes 00:13:20.785 Estimated Reclaim Unit Time Limit: Not Reported 00:13:20.785 RUH Desc #000: RUH Type: Initially Isolated 00:13:20.785 RUH Desc #001: RUH Type: Initially Isolated 00:13:20.785 RUH Desc #002: RUH Type: Initially Isolated 00:13:20.785 RUH Desc #003: RUH Type: Initially Isolated 00:13:20.785 RUH Desc #004: RUH Type: Initially Isolated 00:13:20.785 RUH Desc #005: RUH Type: Initially Isolated 00:13:20.785 RUH Desc #006: RUH Type: Initially Isolated 00:13:20.785 RUH Desc #007: RUH Type: Initially Isolated 00:13:20.785 00:13:20.785 FDP reclaim unit handle usage log page 00:13:20.785 ====================================== 00:13:20.785 Number of Reclaim Unit Handles: 8 00:13:20.785 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:20.785 RUH Usage Desc #001: RUH Attributes: Unused 00:13:20.785 RUH Usage Desc #002: RUH Attributes: Unused 00:13:20.785 RUH Usage Desc #003: RUH Attributes: Unused 00:13:20.785 RUH Usage Desc #004: RUH Attributes: Unused 00:13:20.785 RUH Usage Desc #005: RUH Attributes: Unused 00:13:20.785 RUH Usage Desc #006: RUH Attributes: Unused 00:13:20.785 RUH Usage Desc #007: RUH Attributes: Unused 00:13:20.785 00:13:20.785 FDP statistics log page 00:13:20.785 ======================= 00:13:20.785 Host bytes with metadata written: 512532480 00:13:20.785 Media bytes with metadata written: 512589824 00:13:20.785 Media bytes erased: 0 00:13:20.785 00:13:20.785 FDP events log page 00:13:20.785 =================== 00:13:20.785 Number of FDP events: 0 00:13:20.785 00:13:20.785 NVM Specific Namespace Data 00:13:20.785 =========================== 00:13:20.785 Logical Block Storage Tag Mask: 0 00:13:20.785 Protection Information Capabilities: 00:13:20.785 16b Guard Protection Information Storage Tag Support: No 00:13:20.785 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:20.785 Storage Tag Check Read Support: No 00:13:20.785 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.785 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.785 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.785 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.785 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.785 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.785 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.785 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:20.785 ************************************ 00:13:20.785 END TEST nvme_identify 00:13:20.785 ************************************ 00:13:20.785 00:13:20.785 real 0m1.785s 00:13:20.785 user 0m0.620s 00:13:20.785 sys 0m0.936s 00:13:20.785 11:24:02 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.785 11:24:02 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:13:20.785 11:24:02 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:13:20.785 11:24:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:20.785 11:24:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.785 11:24:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.785 ************************************ 00:13:20.785 START TEST nvme_perf 00:13:20.785 ************************************ 00:13:20.785 11:24:02 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:13:20.785 11:24:02 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:13:22.187 Initializing NVMe Controllers 00:13:22.187 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:22.187 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:22.187 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:22.187 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:22.187 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:22.187 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:22.187 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:22.187 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:22.187 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:22.187 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:22.187 Initialization complete. Launching workers. 
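A quick way to sanity-check the identify report above, and to replay the perf run, is the minimal shell sketch below. It is illustrative only, not part of the autotest scripts: the arithmetic follows directly from the report (each namespace is 1048576 LBAs in LBA Format #04, whose data size is 4096 bytes, and NVMe reports temperature in integer Kelvin), while the flag glosses are one reading of the spdk_nvme_perf options recorded in the log and should be confirmed against the tool's --help output.

#!/usr/bin/env bash
# Capacity check: 1048576 LBAs x 4096 bytes per LBA = 4 GiB per namespace.
echo "$(( 1048576 * 4096 / 1024**3 )) GiB"   # prints: 4 GiB
# Temperature check: 323 Kelvin - 273 = 50 Celsius, matching the report.
echo "$(( 323 - 273 )) C"                    # prints: 50 C
# Replay of the perf invocation recorded above (flag glosses are assumptions):
#   -q 128    queue depth per namespace
#   -w read   sequential read workload
#   -o 12288  I/O size in bytes (12 KiB, a multiple of both 512 and 4096)
#   -t 1      run time in seconds
#   -LL       latency tracking; the doubled flag adds the per-bucket histograms
#   -i 0      shared memory group ID
#   -N        skip shutdown notification on exit
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N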
00:13:22.187 ======================================================== 00:13:22.187 Latency(us) 00:13:22.187 Device Information : IOPS MiB/s Average min max 00:13:22.187 PCIE (0000:00:10.0) NSID 1 from core 0: 13722.51 160.81 9353.72 7638.39 50599.46 00:13:22.187 PCIE (0000:00:11.0) NSID 1 from core 0: 13722.51 160.81 9338.97 7710.24 48279.53 00:13:22.187 PCIE (0000:00:13.0) NSID 1 from core 0: 13722.51 160.81 9323.10 7730.40 46558.47 00:13:22.187 PCIE (0000:00:12.0) NSID 1 from core 0: 13722.51 160.81 9308.23 7725.38 44541.51 00:13:22.187 PCIE (0000:00:12.0) NSID 2 from core 0: 13722.51 160.81 9292.61 7680.60 42425.59 00:13:22.188 PCIE (0000:00:12.0) NSID 3 from core 0: 13786.34 161.56 9233.48 7696.75 35302.94 00:13:22.188 ======================================================== 00:13:22.188 Total : 82398.89 965.61 9308.29 7638.39 50599.46 00:13:22.188 00:13:22.188 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:22.188 ================================================================================= 00:13:22.188 1.00000% : 7790.625us 00:13:22.188 10.00000% : 8159.100us 00:13:22.188 25.00000% : 8422.297us 00:13:22.188 50.00000% : 8843.412us 00:13:22.188 75.00000% : 9317.166us 00:13:22.188 90.00000% : 10054.117us 00:13:22.188 95.00000% : 11159.544us 00:13:22.188 98.00000% : 13107.200us 00:13:22.188 99.00000% : 16318.201us 00:13:22.188 99.50000% : 43585.388us 00:13:22.188 99.90000% : 50323.226us 00:13:22.188 99.99000% : 50533.783us 00:13:22.188 99.99900% : 50744.341us 00:13:22.188 99.99990% : 50744.341us 00:13:22.188 99.99999% : 50744.341us 00:13:22.188 00:13:22.188 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:22.188 ================================================================================= 00:13:22.188 1.00000% : 7895.904us 00:13:22.188 10.00000% : 8211.740us 00:13:22.188 25.00000% : 8474.937us 00:13:22.188 50.00000% : 8790.773us 00:13:22.188 75.00000% : 9317.166us 00:13:22.188 90.00000% : 10054.117us 00:13:22.188 95.00000% : 11106.904us 00:13:22.188 98.00000% : 13317.757us 00:13:22.188 99.00000% : 17055.152us 00:13:22.188 99.50000% : 41479.814us 00:13:22.188 99.90000% : 48007.094us 00:13:22.188 99.99000% : 48428.209us 00:13:22.188 99.99900% : 48428.209us 00:13:22.188 99.99990% : 48428.209us 00:13:22.188 99.99999% : 48428.209us 00:13:22.188 00:13:22.188 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:22.188 ================================================================================= 00:13:22.188 1.00000% : 7895.904us 00:13:22.188 10.00000% : 8211.740us 00:13:22.188 25.00000% : 8474.937us 00:13:22.188 50.00000% : 8790.773us 00:13:22.188 75.00000% : 9317.166us 00:13:22.188 90.00000% : 10054.117us 00:13:22.188 95.00000% : 11001.626us 00:13:22.188 98.00000% : 13265.118us 00:13:22.188 99.00000% : 16844.594us 00:13:22.188 99.50000% : 40005.912us 00:13:22.188 99.90000% : 46322.635us 00:13:22.188 99.99000% : 46533.192us 00:13:22.188 99.99900% : 46743.749us 00:13:22.188 99.99990% : 46743.749us 00:13:22.188 99.99999% : 46743.749us 00:13:22.188 00:13:22.188 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:22.188 ================================================================================= 00:13:22.188 1.00000% : 7895.904us 00:13:22.188 10.00000% : 8211.740us 00:13:22.188 25.00000% : 8474.937us 00:13:22.188 50.00000% : 8790.773us 00:13:22.188 75.00000% : 9317.166us 00:13:22.188 90.00000% : 10054.117us 00:13:22.188 95.00000% : 11106.904us 00:13:22.188 98.00000% : 13580.954us 00:13:22.188 
99.00000% : 16318.201us 00:13:22.188 99.50000% : 37900.337us 00:13:22.188 99.90000% : 44217.060us 00:13:22.188 99.99000% : 44638.175us 00:13:22.188 99.99900% : 44638.175us 00:13:22.188 99.99990% : 44638.175us 00:13:22.188 99.99999% : 44638.175us 00:13:22.188 00:13:22.188 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:22.188 ================================================================================= 00:13:22.188 1.00000% : 7895.904us 00:13:22.188 10.00000% : 8211.740us 00:13:22.188 25.00000% : 8474.937us 00:13:22.188 50.00000% : 8790.773us 00:13:22.188 75.00000% : 9317.166us 00:13:22.188 90.00000% : 10054.117us 00:13:22.188 95.00000% : 11264.822us 00:13:22.188 98.00000% : 13896.790us 00:13:22.188 99.00000% : 15897.086us 00:13:22.188 99.50000% : 36005.320us 00:13:22.188 99.90000% : 42111.486us 00:13:22.188 99.99000% : 42532.601us 00:13:22.188 99.99900% : 42532.601us 00:13:22.188 99.99990% : 42532.601us 00:13:22.188 99.99999% : 42532.601us 00:13:22.188 00:13:22.188 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:22.188 ================================================================================= 00:13:22.188 1.00000% : 7895.904us 00:13:22.188 10.00000% : 8211.740us 00:13:22.188 25.00000% : 8474.937us 00:13:22.188 50.00000% : 8790.773us 00:13:22.188 75.00000% : 9317.166us 00:13:22.188 90.00000% : 10054.117us 00:13:22.188 95.00000% : 11317.462us 00:13:22.188 98.00000% : 14002.069us 00:13:22.188 99.00000% : 15581.250us 00:13:22.188 99.50000% : 28635.810us 00:13:22.188 99.90000% : 34952.533us 00:13:22.188 99.99000% : 35373.648us 00:13:22.188 99.99900% : 35373.648us 00:13:22.188 99.99990% : 35373.648us 00:13:22.188 99.99999% : 35373.648us 00:13:22.188 00:13:22.188 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:22.188 ============================================================================== 00:13:22.188 Range in us Cumulative IO count 00:13:22.188 7632.707 - 7685.346: 0.1381% ( 19) 00:13:22.188 7685.346 - 7737.986: 0.5160% ( 52) 00:13:22.188 7737.986 - 7790.625: 1.0901% ( 79) 00:13:22.188 7790.625 - 7843.264: 1.7951% ( 97) 00:13:22.188 7843.264 - 7895.904: 2.6890% ( 123) 00:13:22.188 7895.904 - 7948.543: 3.9608% ( 175) 00:13:22.188 7948.543 - 8001.182: 5.5378% ( 217) 00:13:22.188 8001.182 - 8053.822: 7.3692% ( 252) 00:13:22.188 8053.822 - 8106.461: 9.4186% ( 282) 00:13:22.188 8106.461 - 8159.100: 11.6352% ( 305) 00:13:22.188 8159.100 - 8211.740: 13.9680% ( 321) 00:13:22.188 8211.740 - 8264.379: 16.6206% ( 365) 00:13:22.188 8264.379 - 8317.018: 19.4186% ( 385) 00:13:22.188 8317.018 - 8369.658: 22.4346% ( 415) 00:13:22.188 8369.658 - 8422.297: 25.4578% ( 416) 00:13:22.188 8422.297 - 8474.937: 28.6555% ( 440) 00:13:22.188 8474.937 - 8527.576: 32.0276% ( 464) 00:13:22.188 8527.576 - 8580.215: 35.5378% ( 483) 00:13:22.188 8580.215 - 8632.855: 39.2078% ( 505) 00:13:22.188 8632.855 - 8685.494: 42.7980% ( 494) 00:13:22.188 8685.494 - 8738.133: 46.2645% ( 477) 00:13:22.188 8738.133 - 8790.773: 49.7820% ( 484) 00:13:22.188 8790.773 - 8843.412: 53.1759% ( 467) 00:13:22.188 8843.412 - 8896.051: 56.3735% ( 440) 00:13:22.188 8896.051 - 8948.691: 59.4695% ( 426) 00:13:22.188 8948.691 - 9001.330: 62.3474% ( 396) 00:13:22.188 9001.330 - 9053.969: 64.8765% ( 348) 00:13:22.188 9053.969 - 9106.609: 67.3328% ( 338) 00:13:22.188 9106.609 - 9159.248: 69.5349% ( 303) 00:13:22.188 9159.248 - 9211.888: 71.5988% ( 284) 00:13:22.188 9211.888 - 9264.527: 73.5320% ( 266) 00:13:22.188 9264.527 - 9317.166: 75.3343% ( 248) 00:13:22.188 9317.166 - 
9369.806: 77.0203% ( 232) 00:13:22.188 9369.806 - 9422.445: 78.6047% ( 218) 00:13:22.188 9422.445 - 9475.084: 80.0727% ( 202) 00:13:22.188 9475.084 - 9527.724: 81.3517% ( 176) 00:13:22.188 9527.724 - 9580.363: 82.5581% ( 166) 00:13:22.188 9580.363 - 9633.002: 83.7282% ( 161) 00:13:22.188 9633.002 - 9685.642: 84.7747% ( 144) 00:13:22.188 9685.642 - 9738.281: 85.7267% ( 131) 00:13:22.188 9738.281 - 9790.920: 86.6570% ( 128) 00:13:22.188 9790.920 - 9843.560: 87.4564% ( 110) 00:13:22.188 9843.560 - 9896.199: 88.1541% ( 96) 00:13:22.188 9896.199 - 9948.839: 88.8953% ( 102) 00:13:22.188 9948.839 - 10001.478: 89.5203% ( 86) 00:13:22.188 10001.478 - 10054.117: 90.0654% ( 75) 00:13:22.188 10054.117 - 10106.757: 90.5959% ( 73) 00:13:22.188 10106.757 - 10159.396: 90.9666% ( 51) 00:13:22.188 10159.396 - 10212.035: 91.3663% ( 55) 00:13:22.188 10212.035 - 10264.675: 91.7297% ( 50) 00:13:22.188 10264.675 - 10317.314: 92.0785% ( 48) 00:13:22.188 10317.314 - 10369.953: 92.3401% ( 36) 00:13:22.188 10369.953 - 10422.593: 92.6744% ( 46) 00:13:22.188 10422.593 - 10475.232: 92.9724% ( 41) 00:13:22.188 10475.232 - 10527.871: 93.2413% ( 37) 00:13:22.188 10527.871 - 10580.511: 93.4956% ( 35) 00:13:22.188 10580.511 - 10633.150: 93.7137% ( 30) 00:13:22.188 10633.150 - 10685.790: 93.8517% ( 19) 00:13:22.188 10685.790 - 10738.429: 94.0189% ( 23) 00:13:22.188 10738.429 - 10791.068: 94.1497% ( 18) 00:13:22.188 10791.068 - 10843.708: 94.3169% ( 23) 00:13:22.188 10843.708 - 10896.347: 94.4549% ( 19) 00:13:22.188 10896.347 - 10948.986: 94.5930% ( 19) 00:13:22.188 10948.986 - 11001.626: 94.6875% ( 13) 00:13:22.188 11001.626 - 11054.265: 94.8183% ( 18) 00:13:22.188 11054.265 - 11106.904: 94.9128% ( 13) 00:13:22.188 11106.904 - 11159.544: 95.0145% ( 14) 00:13:22.188 11159.544 - 11212.183: 95.0945% ( 11) 00:13:22.188 11212.183 - 11264.822: 95.2326% ( 19) 00:13:22.188 11264.822 - 11317.462: 95.3125% ( 11) 00:13:22.188 11317.462 - 11370.101: 95.4142% ( 14) 00:13:22.188 11370.101 - 11422.741: 95.5160% ( 14) 00:13:22.188 11422.741 - 11475.380: 95.6613% ( 20) 00:13:22.188 11475.380 - 11528.019: 95.7485% ( 12) 00:13:22.188 11528.019 - 11580.659: 95.8648% ( 16) 00:13:22.188 11580.659 - 11633.298: 95.9811% ( 16) 00:13:22.188 11633.298 - 11685.937: 96.0828% ( 14) 00:13:22.188 11685.937 - 11738.577: 96.1555% ( 10) 00:13:22.188 11738.577 - 11791.216: 96.2355% ( 11) 00:13:22.188 11791.216 - 11843.855: 96.3154% ( 11) 00:13:22.188 11843.855 - 11896.495: 96.3953% ( 11) 00:13:22.188 11896.495 - 11949.134: 96.4680% ( 10) 00:13:22.188 11949.134 - 12001.773: 96.5480% ( 11) 00:13:22.188 12001.773 - 12054.413: 96.6206% ( 10) 00:13:22.188 12054.413 - 12107.052: 96.6933% ( 10) 00:13:22.188 12107.052 - 12159.692: 96.7805% ( 12) 00:13:22.188 12159.692 - 12212.331: 96.8459% ( 9) 00:13:22.188 12212.331 - 12264.970: 96.9331% ( 12) 00:13:22.188 12264.970 - 12317.610: 97.0131% ( 11) 00:13:22.188 12317.610 - 12370.249: 97.0930% ( 11) 00:13:22.188 12370.249 - 12422.888: 97.1584% ( 9) 00:13:22.188 12422.888 - 12475.528: 97.2238% ( 9) 00:13:22.188 12475.528 - 12528.167: 97.3328% ( 15) 00:13:22.188 12528.167 - 12580.806: 97.3837% ( 7) 00:13:22.188 12580.806 - 12633.446: 97.4637% ( 11) 00:13:22.188 12633.446 - 12686.085: 97.5363% ( 10) 00:13:22.188 12686.085 - 12738.724: 97.6090% ( 10) 00:13:22.188 12738.724 - 12791.364: 97.6744% ( 9) 00:13:22.188 12791.364 - 12844.003: 97.7398% ( 9) 00:13:22.189 12844.003 - 12896.643: 97.7907% ( 7) 00:13:22.189 12896.643 - 12949.282: 97.8634% ( 10) 00:13:22.189 12949.282 - 13001.921: 97.9288% ( 9) 00:13:22.189 13001.921 - 
13054.561: 97.9942% ( 9) 00:13:22.189 13054.561 - 13107.200: 98.0596% ( 9) 00:13:22.189 13107.200 - 13159.839: 98.1032% ( 6) 00:13:22.189 13159.839 - 13212.479: 98.1395% ( 5) 00:13:22.189 13212.479 - 13265.118: 98.1831% ( 6) 00:13:22.189 13265.118 - 13317.757: 98.1904% ( 1) 00:13:22.189 13317.757 - 13370.397: 98.2340% ( 6) 00:13:22.189 13370.397 - 13423.036: 98.2413% ( 1) 00:13:22.189 13423.036 - 13475.676: 98.2485% ( 1) 00:13:22.189 13475.676 - 13580.954: 98.2703% ( 3) 00:13:22.189 13580.954 - 13686.233: 98.2922% ( 3) 00:13:22.189 13686.233 - 13791.512: 98.3140% ( 3) 00:13:22.189 13791.512 - 13896.790: 98.3358% ( 3) 00:13:22.189 13896.790 - 14002.069: 98.3576% ( 3) 00:13:22.189 14002.069 - 14107.348: 98.3648% ( 1) 00:13:22.189 14107.348 - 14212.627: 98.4012% ( 5) 00:13:22.189 14212.627 - 14317.905: 98.4157% ( 2) 00:13:22.189 14317.905 - 14423.184: 98.4302% ( 2) 00:13:22.189 14423.184 - 14528.463: 98.4593% ( 4) 00:13:22.189 14528.463 - 14633.741: 98.4884% ( 4) 00:13:22.189 14633.741 - 14739.020: 98.5320% ( 6) 00:13:22.189 14739.020 - 14844.299: 98.5756% ( 6) 00:13:22.189 14844.299 - 14949.578: 98.6337% ( 8) 00:13:22.189 14949.578 - 15054.856: 98.6773% ( 6) 00:13:22.189 15054.856 - 15160.135: 98.7137% ( 5) 00:13:22.189 15160.135 - 15265.414: 98.7645% ( 7) 00:13:22.189 15265.414 - 15370.692: 98.7936% ( 4) 00:13:22.189 15370.692 - 15475.971: 98.8154% ( 3) 00:13:22.189 15475.971 - 15581.250: 98.8445% ( 4) 00:13:22.189 15581.250 - 15686.529: 98.8663% ( 3) 00:13:22.189 15686.529 - 15791.807: 98.8953% ( 4) 00:13:22.189 15791.807 - 15897.086: 98.9244% ( 4) 00:13:22.189 15897.086 - 16002.365: 98.9462% ( 3) 00:13:22.189 16002.365 - 16107.643: 98.9680% ( 3) 00:13:22.189 16107.643 - 16212.922: 98.9971% ( 4) 00:13:22.189 16212.922 - 16318.201: 99.0189% ( 3) 00:13:22.189 16318.201 - 16423.480: 99.0407% ( 3) 00:13:22.189 16423.480 - 16528.758: 99.0698% ( 4) 00:13:22.189 41479.814 - 41690.371: 99.0770% ( 1) 00:13:22.189 41690.371 - 41900.929: 99.1352% ( 8) 00:13:22.189 41900.929 - 42111.486: 99.1788% ( 6) 00:13:22.189 42111.486 - 42322.043: 99.2297% ( 7) 00:13:22.189 42322.043 - 42532.601: 99.2805% ( 7) 00:13:22.189 42532.601 - 42743.158: 99.3314% ( 7) 00:13:22.189 42743.158 - 42953.716: 99.3823% ( 7) 00:13:22.189 42953.716 - 43164.273: 99.4259% ( 6) 00:13:22.189 43164.273 - 43374.831: 99.4695% ( 6) 00:13:22.189 43374.831 - 43585.388: 99.5276% ( 8) 00:13:22.189 43585.388 - 43795.945: 99.5349% ( 1) 00:13:22.189 48428.209 - 48638.766: 99.5422% ( 1) 00:13:22.189 48638.766 - 48849.324: 99.5858% ( 6) 00:13:22.189 48849.324 - 49059.881: 99.6366% ( 7) 00:13:22.189 49059.881 - 49270.439: 99.6875% ( 7) 00:13:22.189 49270.439 - 49480.996: 99.7384% ( 7) 00:13:22.189 49480.996 - 49691.553: 99.7892% ( 7) 00:13:22.189 49691.553 - 49902.111: 99.8401% ( 7) 00:13:22.189 49902.111 - 50112.668: 99.8910% ( 7) 00:13:22.189 50112.668 - 50323.226: 99.9491% ( 8) 00:13:22.189 50323.226 - 50533.783: 99.9927% ( 6) 00:13:22.189 50533.783 - 50744.341: 100.0000% ( 1) 00:13:22.189 00:13:22.189 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:22.189 ============================================================================== 00:13:22.189 Range in us Cumulative IO count 00:13:22.189 7685.346 - 7737.986: 0.0945% ( 13) 00:13:22.189 7737.986 - 7790.625: 0.3198% ( 31) 00:13:22.189 7790.625 - 7843.264: 0.8576% ( 74) 00:13:22.189 7843.264 - 7895.904: 1.6715% ( 112) 00:13:22.189 7895.904 - 7948.543: 2.6163% ( 130) 00:13:22.189 7948.543 - 8001.182: 3.8881% ( 175) 00:13:22.189 8001.182 - 8053.822: 5.5233% ( 225) 00:13:22.189 
8053.822 - 8106.461: 7.5727% ( 282) 00:13:22.189 8106.461 - 8159.100: 9.8692% ( 316) 00:13:22.189 8159.100 - 8211.740: 12.3110% ( 336) 00:13:22.189 8211.740 - 8264.379: 15.0000% ( 370) 00:13:22.189 8264.379 - 8317.018: 17.9506% ( 406) 00:13:22.189 8317.018 - 8369.658: 21.2500% ( 454) 00:13:22.189 8369.658 - 8422.297: 24.6512% ( 468) 00:13:22.189 8422.297 - 8474.937: 28.3648% ( 511) 00:13:22.189 8474.937 - 8527.576: 32.0276% ( 504) 00:13:22.189 8527.576 - 8580.215: 35.9157% ( 535) 00:13:22.189 8580.215 - 8632.855: 39.7602% ( 529) 00:13:22.189 8632.855 - 8685.494: 43.4666% ( 510) 00:13:22.189 8685.494 - 8738.133: 47.0349% ( 491) 00:13:22.189 8738.133 - 8790.773: 50.5814% ( 488) 00:13:22.189 8790.773 - 8843.412: 54.1279% ( 488) 00:13:22.189 8843.412 - 8896.051: 57.3765% ( 447) 00:13:22.189 8896.051 - 8948.691: 60.2834% ( 400) 00:13:22.189 8948.691 - 9001.330: 63.0596% ( 382) 00:13:22.189 9001.330 - 9053.969: 65.6613% ( 358) 00:13:22.189 9053.969 - 9106.609: 68.0596% ( 330) 00:13:22.189 9106.609 - 9159.248: 70.3198% ( 311) 00:13:22.189 9159.248 - 9211.888: 72.4709% ( 296) 00:13:22.189 9211.888 - 9264.527: 74.4259% ( 269) 00:13:22.189 9264.527 - 9317.166: 76.1773% ( 241) 00:13:22.189 9317.166 - 9369.806: 77.8706% ( 233) 00:13:22.189 9369.806 - 9422.445: 79.4186% ( 213) 00:13:22.189 9422.445 - 9475.084: 80.8866% ( 202) 00:13:22.189 9475.084 - 9527.724: 82.1948% ( 180) 00:13:22.189 9527.724 - 9580.363: 83.4084% ( 167) 00:13:22.189 9580.363 - 9633.002: 84.4113% ( 138) 00:13:22.189 9633.002 - 9685.642: 85.3198% ( 125) 00:13:22.189 9685.642 - 9738.281: 86.1773% ( 118) 00:13:22.189 9738.281 - 9790.920: 86.9840% ( 111) 00:13:22.189 9790.920 - 9843.560: 87.7616% ( 107) 00:13:22.189 9843.560 - 9896.199: 88.4666% ( 97) 00:13:22.189 9896.199 - 9948.839: 89.1206% ( 90) 00:13:22.189 9948.839 - 10001.478: 89.7602% ( 88) 00:13:22.189 10001.478 - 10054.117: 90.2544% ( 68) 00:13:22.189 10054.117 - 10106.757: 90.6686% ( 57) 00:13:22.189 10106.757 - 10159.396: 91.0756% ( 56) 00:13:22.189 10159.396 - 10212.035: 91.4244% ( 48) 00:13:22.189 10212.035 - 10264.675: 91.7587% ( 46) 00:13:22.189 10264.675 - 10317.314: 92.0640% ( 42) 00:13:22.189 10317.314 - 10369.953: 92.3619% ( 41) 00:13:22.189 10369.953 - 10422.593: 92.6744% ( 43) 00:13:22.189 10422.593 - 10475.232: 92.9869% ( 43) 00:13:22.189 10475.232 - 10527.871: 93.2558% ( 37) 00:13:22.189 10527.871 - 10580.511: 93.5174% ( 36) 00:13:22.189 10580.511 - 10633.150: 93.7282% ( 29) 00:13:22.189 10633.150 - 10685.790: 93.9462% ( 30) 00:13:22.189 10685.790 - 10738.429: 94.1206% ( 24) 00:13:22.189 10738.429 - 10791.068: 94.2951% ( 24) 00:13:22.189 10791.068 - 10843.708: 94.4331% ( 19) 00:13:22.189 10843.708 - 10896.347: 94.6076% ( 24) 00:13:22.189 10896.347 - 10948.986: 94.7602% ( 21) 00:13:22.189 10948.986 - 11001.626: 94.8765% ( 16) 00:13:22.189 11001.626 - 11054.265: 94.9927% ( 16) 00:13:22.189 11054.265 - 11106.904: 95.1235% ( 18) 00:13:22.189 11106.904 - 11159.544: 95.2253% ( 14) 00:13:22.189 11159.544 - 11212.183: 95.3052% ( 11) 00:13:22.189 11212.183 - 11264.822: 95.3852% ( 11) 00:13:22.189 11264.822 - 11317.462: 95.4651% ( 11) 00:13:22.189 11317.462 - 11370.101: 95.5378% ( 10) 00:13:22.189 11370.101 - 11422.741: 95.6177% ( 11) 00:13:22.189 11422.741 - 11475.380: 95.7049% ( 12) 00:13:22.189 11475.380 - 11528.019: 95.7703% ( 9) 00:13:22.189 11528.019 - 11580.659: 95.8358% ( 9) 00:13:22.189 11580.659 - 11633.298: 95.8866% ( 7) 00:13:22.189 11633.298 - 11685.937: 95.9520% ( 9) 00:13:22.189 11685.937 - 11738.577: 96.0102% ( 8) 00:13:22.189 11738.577 - 11791.216: 
96.0683% ( 8) 00:13:22.189 11791.216 - 11843.855: 96.1265% ( 8) 00:13:22.189 11843.855 - 11896.495: 96.1846% ( 8) 00:13:22.189 11896.495 - 11949.134: 96.2427% ( 8) 00:13:22.189 11949.134 - 12001.773: 96.2936% ( 7) 00:13:22.189 12001.773 - 12054.413: 96.3517% ( 8) 00:13:22.189 12054.413 - 12107.052: 96.4026% ( 7) 00:13:22.189 12107.052 - 12159.692: 96.4898% ( 12) 00:13:22.189 12159.692 - 12212.331: 96.5843% ( 13) 00:13:22.189 12212.331 - 12264.970: 96.6570% ( 10) 00:13:22.189 12264.970 - 12317.610: 96.7369% ( 11) 00:13:22.189 12317.610 - 12370.249: 96.8241% ( 12) 00:13:22.189 12370.249 - 12422.888: 96.8750% ( 7) 00:13:22.189 12422.888 - 12475.528: 96.9404% ( 9) 00:13:22.189 12475.528 - 12528.167: 97.0058% ( 9) 00:13:22.189 12528.167 - 12580.806: 97.0930% ( 12) 00:13:22.189 12580.806 - 12633.446: 97.1584% ( 9) 00:13:22.189 12633.446 - 12686.085: 97.2384% ( 11) 00:13:22.189 12686.085 - 12738.724: 97.2965% ( 8) 00:13:22.189 12738.724 - 12791.364: 97.3692% ( 10) 00:13:22.189 12791.364 - 12844.003: 97.4419% ( 10) 00:13:22.189 12844.003 - 12896.643: 97.5073% ( 9) 00:13:22.189 12896.643 - 12949.282: 97.5799% ( 10) 00:13:22.189 12949.282 - 13001.921: 97.6672% ( 12) 00:13:22.189 13001.921 - 13054.561: 97.7326% ( 9) 00:13:22.189 13054.561 - 13107.200: 97.8052% ( 10) 00:13:22.189 13107.200 - 13159.839: 97.8488% ( 6) 00:13:22.189 13159.839 - 13212.479: 97.9142% ( 9) 00:13:22.189 13212.479 - 13265.118: 97.9797% ( 9) 00:13:22.189 13265.118 - 13317.757: 98.0378% ( 8) 00:13:22.189 13317.757 - 13370.397: 98.1105% ( 10) 00:13:22.189 13370.397 - 13423.036: 98.1686% ( 8) 00:13:22.189 13423.036 - 13475.676: 98.2049% ( 5) 00:13:22.189 13475.676 - 13580.954: 98.2922% ( 12) 00:13:22.189 13580.954 - 13686.233: 98.3721% ( 11) 00:13:22.189 13686.233 - 13791.512: 98.4230% ( 7) 00:13:22.189 13791.512 - 13896.790: 98.4520% ( 4) 00:13:22.189 13896.790 - 14002.069: 98.4738% ( 3) 00:13:22.189 14002.069 - 14107.348: 98.4884% ( 2) 00:13:22.189 14107.348 - 14212.627: 98.5174% ( 4) 00:13:22.189 14212.627 - 14317.905: 98.5392% ( 3) 00:13:22.189 14317.905 - 14423.184: 98.5610% ( 3) 00:13:22.189 14423.184 - 14528.463: 98.5828% ( 3) 00:13:22.189 14528.463 - 14633.741: 98.5974% ( 2) 00:13:22.189 14633.741 - 14739.020: 98.6047% ( 1) 00:13:22.189 15581.250 - 15686.529: 98.6410% ( 5) 00:13:22.189 15686.529 - 15791.807: 98.6628% ( 3) 00:13:22.189 15791.807 - 15897.086: 98.6846% ( 3) 00:13:22.189 15897.086 - 16002.365: 98.7064% ( 3) 00:13:22.189 16002.365 - 16107.643: 98.7355% ( 4) 00:13:22.189 16107.643 - 16212.922: 98.7718% ( 5) 00:13:22.189 16212.922 - 16318.201: 98.8154% ( 6) 00:13:22.189 16318.201 - 16423.480: 98.8445% ( 4) 00:13:22.190 16423.480 - 16528.758: 98.8663% ( 3) 00:13:22.190 16528.758 - 16634.037: 98.8953% ( 4) 00:13:22.190 16634.037 - 16739.316: 98.9172% ( 3) 00:13:22.190 16739.316 - 16844.594: 98.9462% ( 4) 00:13:22.190 16844.594 - 16949.873: 98.9753% ( 4) 00:13:22.190 16949.873 - 17055.152: 99.0044% ( 4) 00:13:22.190 17055.152 - 17160.431: 99.0334% ( 4) 00:13:22.190 17160.431 - 17265.709: 99.0698% ( 5) 00:13:22.190 39374.239 - 39584.797: 99.0843% ( 2) 00:13:22.190 39584.797 - 39795.354: 99.1424% ( 8) 00:13:22.190 39795.354 - 40005.912: 99.1933% ( 7) 00:13:22.190 40005.912 - 40216.469: 99.2442% ( 7) 00:13:22.190 40216.469 - 40427.027: 99.3023% ( 8) 00:13:22.190 40427.027 - 40637.584: 99.3459% ( 6) 00:13:22.190 40637.584 - 40848.141: 99.3968% ( 7) 00:13:22.190 40848.141 - 41058.699: 99.4549% ( 8) 00:13:22.190 41058.699 - 41269.256: 99.4985% ( 6) 00:13:22.190 41269.256 - 41479.814: 99.5349% ( 5) 00:13:22.190 46322.635 - 
46533.192: 99.5567% ( 3) 00:13:22.190 46533.192 - 46743.749: 99.6148% ( 8) 00:13:22.190 46743.749 - 46954.307: 99.6584% ( 6) 00:13:22.190 46954.307 - 47164.864: 99.7166% ( 8) 00:13:22.190 47164.864 - 47375.422: 99.7602% ( 6) 00:13:22.190 47375.422 - 47585.979: 99.8110% ( 7) 00:13:22.190 47585.979 - 47796.537: 99.8692% ( 8) 00:13:22.190 47796.537 - 48007.094: 99.9273% ( 8) 00:13:22.190 48007.094 - 48217.651: 99.9782% ( 7) 00:13:22.190 48217.651 - 48428.209: 100.0000% ( 3) 00:13:22.190 00:13:22.190 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:22.190 ============================================================================== 00:13:22.190 Range in us Cumulative IO count 00:13:22.190 7685.346 - 7737.986: 0.0218% ( 3) 00:13:22.190 7737.986 - 7790.625: 0.3052% ( 39) 00:13:22.190 7790.625 - 7843.264: 0.8503% ( 75) 00:13:22.190 7843.264 - 7895.904: 1.6933% ( 116) 00:13:22.190 7895.904 - 7948.543: 2.6308% ( 129) 00:13:22.190 7948.543 - 8001.182: 3.9462% ( 181) 00:13:22.190 8001.182 - 8053.822: 5.5451% ( 220) 00:13:22.190 8053.822 - 8106.461: 7.5145% ( 271) 00:13:22.190 8106.461 - 8159.100: 9.8183% ( 317) 00:13:22.190 8159.100 - 8211.740: 12.2456% ( 334) 00:13:22.190 8211.740 - 8264.379: 14.8837% ( 363) 00:13:22.190 8264.379 - 8317.018: 17.8997% ( 415) 00:13:22.190 8317.018 - 8369.658: 21.0683% ( 436) 00:13:22.190 8369.658 - 8422.297: 24.4549% ( 466) 00:13:22.190 8422.297 - 8474.937: 28.0451% ( 494) 00:13:22.190 8474.937 - 8527.576: 31.8023% ( 517) 00:13:22.190 8527.576 - 8580.215: 35.6395% ( 528) 00:13:22.190 8580.215 - 8632.855: 39.4840% ( 529) 00:13:22.190 8632.855 - 8685.494: 43.1904% ( 510) 00:13:22.190 8685.494 - 8738.133: 46.7805% ( 494) 00:13:22.190 8738.133 - 8790.773: 50.3561% ( 492) 00:13:22.190 8790.773 - 8843.412: 53.7427% ( 466) 00:13:22.190 8843.412 - 8896.051: 56.9404% ( 440) 00:13:22.190 8896.051 - 8948.691: 59.8547% ( 401) 00:13:22.190 8948.691 - 9001.330: 62.5436% ( 370) 00:13:22.190 9001.330 - 9053.969: 65.1381% ( 357) 00:13:22.190 9053.969 - 9106.609: 67.6817% ( 350) 00:13:22.190 9106.609 - 9159.248: 69.9927% ( 318) 00:13:22.190 9159.248 - 9211.888: 72.1221% ( 293) 00:13:22.190 9211.888 - 9264.527: 74.1715% ( 282) 00:13:22.190 9264.527 - 9317.166: 75.9956% ( 251) 00:13:22.190 9317.166 - 9369.806: 77.6890% ( 233) 00:13:22.190 9369.806 - 9422.445: 79.1933% ( 207) 00:13:22.190 9422.445 - 9475.084: 80.6541% ( 201) 00:13:22.190 9475.084 - 9527.724: 81.9985% ( 185) 00:13:22.190 9527.724 - 9580.363: 83.2049% ( 166) 00:13:22.190 9580.363 - 9633.002: 84.3023% ( 151) 00:13:22.190 9633.002 - 9685.642: 85.3488% ( 144) 00:13:22.190 9685.642 - 9738.281: 86.2355% ( 122) 00:13:22.190 9738.281 - 9790.920: 87.0858% ( 117) 00:13:22.190 9790.920 - 9843.560: 87.8125% ( 100) 00:13:22.190 9843.560 - 9896.199: 88.5610% ( 103) 00:13:22.190 9896.199 - 9948.839: 89.2733% ( 98) 00:13:22.190 9948.839 - 10001.478: 89.8328% ( 77) 00:13:22.190 10001.478 - 10054.117: 90.3270% ( 68) 00:13:22.190 10054.117 - 10106.757: 90.7703% ( 61) 00:13:22.190 10106.757 - 10159.396: 91.1483% ( 52) 00:13:22.190 10159.396 - 10212.035: 91.5480% ( 55) 00:13:22.190 10212.035 - 10264.675: 91.9113% ( 50) 00:13:22.190 10264.675 - 10317.314: 92.2529% ( 47) 00:13:22.190 10317.314 - 10369.953: 92.5509% ( 41) 00:13:22.190 10369.953 - 10422.593: 92.8488% ( 41) 00:13:22.190 10422.593 - 10475.232: 93.1395% ( 40) 00:13:22.190 10475.232 - 10527.871: 93.3939% ( 35) 00:13:22.190 10527.871 - 10580.511: 93.6047% ( 29) 00:13:22.190 10580.511 - 10633.150: 93.8299% ( 31) 00:13:22.190 10633.150 - 10685.790: 94.0480% ( 30) 
00:13:22.190 10685.790 - 10738.429: 94.2369% ( 26) 00:13:22.190 10738.429 - 10791.068: 94.4259% ( 26) 00:13:22.190 10791.068 - 10843.708: 94.6003% ( 24) 00:13:22.190 10843.708 - 10896.347: 94.7820% ( 25) 00:13:22.190 10896.347 - 10948.986: 94.9201% ( 19) 00:13:22.190 10948.986 - 11001.626: 95.0727% ( 21) 00:13:22.190 11001.626 - 11054.265: 95.1817% ( 15) 00:13:22.190 11054.265 - 11106.904: 95.2980% ( 16) 00:13:22.190 11106.904 - 11159.544: 95.4070% ( 15) 00:13:22.190 11159.544 - 11212.183: 95.5160% ( 15) 00:13:22.190 11212.183 - 11264.822: 95.5959% ( 11) 00:13:22.190 11264.822 - 11317.462: 95.6613% ( 9) 00:13:22.190 11317.462 - 11370.101: 95.7413% ( 11) 00:13:22.190 11370.101 - 11422.741: 95.8140% ( 10) 00:13:22.190 11422.741 - 11475.380: 95.8939% ( 11) 00:13:22.190 11475.380 - 11528.019: 95.9811% ( 12) 00:13:22.190 11528.019 - 11580.659: 96.0320% ( 7) 00:13:22.190 11580.659 - 11633.298: 96.0756% ( 6) 00:13:22.190 11633.298 - 11685.937: 96.1337% ( 8) 00:13:22.190 11685.937 - 11738.577: 96.1701% ( 5) 00:13:22.190 11738.577 - 11791.216: 96.2064% ( 5) 00:13:22.190 11791.216 - 11843.855: 96.2573% ( 7) 00:13:22.190 11843.855 - 11896.495: 96.2936% ( 5) 00:13:22.190 11896.495 - 11949.134: 96.3517% ( 8) 00:13:22.190 11949.134 - 12001.773: 96.4026% ( 7) 00:13:22.190 12001.773 - 12054.413: 96.4535% ( 7) 00:13:22.190 12054.413 - 12107.052: 96.5407% ( 12) 00:13:22.190 12107.052 - 12159.692: 96.6061% ( 9) 00:13:22.190 12159.692 - 12212.331: 96.6642% ( 8) 00:13:22.190 12212.331 - 12264.970: 96.7297% ( 9) 00:13:22.190 12264.970 - 12317.610: 96.7878% ( 8) 00:13:22.190 12317.610 - 12370.249: 96.8314% ( 6) 00:13:22.190 12370.249 - 12422.888: 96.8968% ( 9) 00:13:22.190 12422.888 - 12475.528: 96.9549% ( 8) 00:13:22.190 12475.528 - 12528.167: 97.0131% ( 8) 00:13:22.190 12528.167 - 12580.806: 97.0785% ( 9) 00:13:22.190 12580.806 - 12633.446: 97.1294% ( 7) 00:13:22.190 12633.446 - 12686.085: 97.2238% ( 13) 00:13:22.190 12686.085 - 12738.724: 97.2965% ( 10) 00:13:22.190 12738.724 - 12791.364: 97.3547% ( 8) 00:13:22.190 12791.364 - 12844.003: 97.4273% ( 10) 00:13:22.190 12844.003 - 12896.643: 97.5073% ( 11) 00:13:22.190 12896.643 - 12949.282: 97.5945% ( 12) 00:13:22.190 12949.282 - 13001.921: 97.6817% ( 12) 00:13:22.190 13001.921 - 13054.561: 97.7544% ( 10) 00:13:22.190 13054.561 - 13107.200: 97.8270% ( 10) 00:13:22.190 13107.200 - 13159.839: 97.8924% ( 9) 00:13:22.190 13159.839 - 13212.479: 97.9433% ( 7) 00:13:22.190 13212.479 - 13265.118: 98.0015% ( 8) 00:13:22.190 13265.118 - 13317.757: 98.0669% ( 9) 00:13:22.190 13317.757 - 13370.397: 98.1105% ( 6) 00:13:22.190 13370.397 - 13423.036: 98.1395% ( 4) 00:13:22.190 13423.036 - 13475.676: 98.1904% ( 7) 00:13:22.190 13475.676 - 13580.954: 98.2485% ( 8) 00:13:22.190 13580.954 - 13686.233: 98.3067% ( 8) 00:13:22.190 13686.233 - 13791.512: 98.3648% ( 8) 00:13:22.190 13791.512 - 13896.790: 98.4157% ( 7) 00:13:22.190 13896.790 - 14002.069: 98.4738% ( 8) 00:13:22.190 14002.069 - 14107.348: 98.5029% ( 4) 00:13:22.190 14107.348 - 14212.627: 98.5320% ( 4) 00:13:22.190 14212.627 - 14317.905: 98.5538% ( 3) 00:13:22.190 14317.905 - 14423.184: 98.5756% ( 3) 00:13:22.190 14423.184 - 14528.463: 98.5974% ( 3) 00:13:22.190 14528.463 - 14633.741: 98.6047% ( 1) 00:13:22.190 15054.856 - 15160.135: 98.6337% ( 4) 00:13:22.190 15160.135 - 15265.414: 98.6483% ( 2) 00:13:22.190 15265.414 - 15370.692: 98.6701% ( 3) 00:13:22.190 15370.692 - 15475.971: 98.6919% ( 3) 00:13:22.190 15475.971 - 15581.250: 98.7209% ( 4) 00:13:22.190 15581.250 - 15686.529: 98.7427% ( 3) 00:13:22.190 15686.529 - 
15791.807: 98.7718% ( 4) 00:13:22.190 15791.807 - 15897.086: 98.7936% ( 3) 00:13:22.190 15897.086 - 16002.365: 98.8227% ( 4) 00:13:22.190 16002.365 - 16107.643: 98.8517% ( 4) 00:13:22.190 16107.643 - 16212.922: 98.8735% ( 3) 00:13:22.190 16212.922 - 16318.201: 98.9026% ( 4) 00:13:22.190 16318.201 - 16423.480: 98.9244% ( 3) 00:13:22.190 16423.480 - 16528.758: 98.9535% ( 4) 00:13:22.190 16528.758 - 16634.037: 98.9826% ( 4) 00:13:22.190 16634.037 - 16739.316: 98.9971% ( 2) 00:13:22.190 16739.316 - 16844.594: 99.0189% ( 3) 00:13:22.190 16844.594 - 16949.873: 99.0334% ( 2) 00:13:22.190 16949.873 - 17055.152: 99.0552% ( 3) 00:13:22.190 17055.152 - 17160.431: 99.0698% ( 2) 00:13:22.190 38110.895 - 38321.452: 99.1061% ( 5) 00:13:22.190 38321.452 - 38532.010: 99.1642% ( 8) 00:13:22.190 38532.010 - 38742.567: 99.2151% ( 7) 00:13:22.190 38742.567 - 38953.124: 99.2733% ( 8) 00:13:22.190 38953.124 - 39163.682: 99.3314% ( 8) 00:13:22.190 39163.682 - 39374.239: 99.3823% ( 7) 00:13:22.190 39374.239 - 39584.797: 99.4404% ( 8) 00:13:22.190 39584.797 - 39795.354: 99.4913% ( 7) 00:13:22.190 39795.354 - 40005.912: 99.5349% ( 6) 00:13:22.190 44638.175 - 44848.733: 99.5567% ( 3) 00:13:22.190 44848.733 - 45059.290: 99.6148% ( 8) 00:13:22.190 45059.290 - 45269.847: 99.6657% ( 7) 00:13:22.190 45269.847 - 45480.405: 99.7166% ( 7) 00:13:22.190 45480.405 - 45690.962: 99.7674% ( 7) 00:13:22.190 45690.962 - 45901.520: 99.8256% ( 8) 00:13:22.190 45901.520 - 46112.077: 99.8765% ( 7) 00:13:22.190 46112.077 - 46322.635: 99.9346% ( 8) 00:13:22.190 46322.635 - 46533.192: 99.9927% ( 8) 00:13:22.190 46533.192 - 46743.749: 100.0000% ( 1) 00:13:22.190 00:13:22.190 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:22.190 ============================================================================== 00:13:22.190 Range in us Cumulative IO count 00:13:22.190 7685.346 - 7737.986: 0.0218% ( 3) 00:13:22.191 7737.986 - 7790.625: 0.2253% ( 28) 00:13:22.191 7790.625 - 7843.264: 0.7922% ( 78) 00:13:22.191 7843.264 - 7895.904: 1.4535% ( 91) 00:13:22.191 7895.904 - 7948.543: 2.4709% ( 140) 00:13:22.191 7948.543 - 8001.182: 3.7645% ( 178) 00:13:22.191 8001.182 - 8053.822: 5.4433% ( 231) 00:13:22.191 8053.822 - 8106.461: 7.3547% ( 263) 00:13:22.191 8106.461 - 8159.100: 9.5858% ( 307) 00:13:22.191 8159.100 - 8211.740: 12.0858% ( 344) 00:13:22.191 8211.740 - 8264.379: 14.8910% ( 386) 00:13:22.191 8264.379 - 8317.018: 17.9797% ( 425) 00:13:22.191 8317.018 - 8369.658: 21.2064% ( 444) 00:13:22.191 8369.658 - 8422.297: 24.6148% ( 469) 00:13:22.191 8422.297 - 8474.937: 28.1977% ( 493) 00:13:22.191 8474.937 - 8527.576: 31.8968% ( 509) 00:13:22.191 8527.576 - 8580.215: 35.6831% ( 521) 00:13:22.191 8580.215 - 8632.855: 39.6294% ( 543) 00:13:22.191 8632.855 - 8685.494: 43.2558% ( 499) 00:13:22.191 8685.494 - 8738.133: 46.8750% ( 498) 00:13:22.191 8738.133 - 8790.773: 50.3270% ( 475) 00:13:22.191 8790.773 - 8843.412: 53.5683% ( 446) 00:13:22.191 8843.412 - 8896.051: 56.7006% ( 431) 00:13:22.191 8896.051 - 8948.691: 59.6584% ( 407) 00:13:22.191 8948.691 - 9001.330: 62.4491% ( 384) 00:13:22.191 9001.330 - 9053.969: 65.1163% ( 367) 00:13:22.191 9053.969 - 9106.609: 67.5945% ( 341) 00:13:22.191 9106.609 - 9159.248: 69.7820% ( 301) 00:13:22.191 9159.248 - 9211.888: 71.8750% ( 288) 00:13:22.191 9211.888 - 9264.527: 73.9172% ( 281) 00:13:22.191 9264.527 - 9317.166: 75.7631% ( 254) 00:13:22.191 9317.166 - 9369.806: 77.4709% ( 235) 00:13:22.191 9369.806 - 9422.445: 78.9898% ( 209) 00:13:22.191 9422.445 - 9475.084: 80.4651% ( 203) 00:13:22.191 
9475.084 - 9527.724: 81.8241% ( 187) 00:13:22.191 9527.724 - 9580.363: 83.0305% ( 166) 00:13:22.191 9580.363 - 9633.002: 84.1860% ( 159) 00:13:22.191 9633.002 - 9685.642: 85.2980% ( 153) 00:13:22.191 9685.642 - 9738.281: 86.3009% ( 138) 00:13:22.191 9738.281 - 9790.920: 87.0567% ( 104) 00:13:22.191 9790.920 - 9843.560: 87.8052% ( 103) 00:13:22.191 9843.560 - 9896.199: 88.4593% ( 90) 00:13:22.191 9896.199 - 9948.839: 89.0262% ( 78) 00:13:22.191 9948.839 - 10001.478: 89.5785% ( 76) 00:13:22.191 10001.478 - 10054.117: 90.1235% ( 75) 00:13:22.191 10054.117 - 10106.757: 90.6105% ( 67) 00:13:22.191 10106.757 - 10159.396: 91.1119% ( 69) 00:13:22.191 10159.396 - 10212.035: 91.4898% ( 52) 00:13:22.191 10212.035 - 10264.675: 91.8314% ( 47) 00:13:22.191 10264.675 - 10317.314: 92.1439% ( 43) 00:13:22.191 10317.314 - 10369.953: 92.4346% ( 40) 00:13:22.191 10369.953 - 10422.593: 92.6890% ( 35) 00:13:22.191 10422.593 - 10475.232: 92.9360% ( 34) 00:13:22.191 10475.232 - 10527.871: 93.1468% ( 29) 00:13:22.191 10527.871 - 10580.511: 93.3503% ( 28) 00:13:22.191 10580.511 - 10633.150: 93.5828% ( 32) 00:13:22.191 10633.150 - 10685.790: 93.8009% ( 30) 00:13:22.191 10685.790 - 10738.429: 93.9898% ( 26) 00:13:22.191 10738.429 - 10791.068: 94.1570% ( 23) 00:13:22.191 10791.068 - 10843.708: 94.3387% ( 25) 00:13:22.191 10843.708 - 10896.347: 94.4840% ( 20) 00:13:22.191 10896.347 - 10948.986: 94.6439% ( 22) 00:13:22.191 10948.986 - 11001.626: 94.7820% ( 19) 00:13:22.191 11001.626 - 11054.265: 94.9055% ( 17) 00:13:22.191 11054.265 - 11106.904: 95.0291% ( 17) 00:13:22.191 11106.904 - 11159.544: 95.1599% ( 18) 00:13:22.191 11159.544 - 11212.183: 95.2907% ( 18) 00:13:22.191 11212.183 - 11264.822: 95.4360% ( 20) 00:13:22.191 11264.822 - 11317.462: 95.5669% ( 18) 00:13:22.191 11317.462 - 11370.101: 95.6759% ( 15) 00:13:22.191 11370.101 - 11422.741: 95.7776% ( 14) 00:13:22.191 11422.741 - 11475.380: 95.8648% ( 12) 00:13:22.191 11475.380 - 11528.019: 95.9520% ( 12) 00:13:22.191 11528.019 - 11580.659: 96.0320% ( 11) 00:13:22.191 11580.659 - 11633.298: 96.1265% ( 13) 00:13:22.191 11633.298 - 11685.937: 96.1846% ( 8) 00:13:22.191 11685.937 - 11738.577: 96.2573% ( 10) 00:13:22.191 11738.577 - 11791.216: 96.3009% ( 6) 00:13:22.191 11791.216 - 11843.855: 96.3663% ( 9) 00:13:22.191 11843.855 - 11896.495: 96.4390% ( 10) 00:13:22.191 11896.495 - 11949.134: 96.4971% ( 8) 00:13:22.191 11949.134 - 12001.773: 96.5698% ( 10) 00:13:22.191 12001.773 - 12054.413: 96.6352% ( 9) 00:13:22.191 12054.413 - 12107.052: 96.6715% ( 5) 00:13:22.191 12107.052 - 12159.692: 96.7078% ( 5) 00:13:22.191 12159.692 - 12212.331: 96.7660% ( 8) 00:13:22.191 12212.331 - 12264.970: 96.8023% ( 5) 00:13:22.191 12264.970 - 12317.610: 96.8459% ( 6) 00:13:22.191 12317.610 - 12370.249: 96.8968% ( 7) 00:13:22.191 12370.249 - 12422.888: 96.9477% ( 7) 00:13:22.191 12422.888 - 12475.528: 96.9985% ( 7) 00:13:22.191 12475.528 - 12528.167: 97.0640% ( 9) 00:13:22.191 12528.167 - 12580.806: 97.1221% ( 8) 00:13:22.191 12580.806 - 12633.446: 97.1802% ( 8) 00:13:22.191 12633.446 - 12686.085: 97.2311% ( 7) 00:13:22.191 12686.085 - 12738.724: 97.2892% ( 8) 00:13:22.191 12738.724 - 12791.364: 97.3328% ( 6) 00:13:22.191 12791.364 - 12844.003: 97.3692% ( 5) 00:13:22.191 12844.003 - 12896.643: 97.4128% ( 6) 00:13:22.191 12896.643 - 12949.282: 97.4564% ( 6) 00:13:22.191 12949.282 - 13001.921: 97.5073% ( 7) 00:13:22.191 13001.921 - 13054.561: 97.5581% ( 7) 00:13:22.191 13054.561 - 13107.200: 97.6017% ( 6) 00:13:22.191 13107.200 - 13159.839: 97.6672% ( 9) 00:13:22.191 13159.839 - 13212.479: 
97.7253% ( 8) 00:13:22.191 13212.479 - 13265.118: 97.7689% ( 6) 00:13:22.191 13265.118 - 13317.757: 97.8270% ( 8) 00:13:22.191 13317.757 - 13370.397: 97.8852% ( 8) 00:13:22.191 13370.397 - 13423.036: 97.9288% ( 6) 00:13:22.191 13423.036 - 13475.676: 97.9869% ( 8) 00:13:22.191 13475.676 - 13580.954: 98.0741% ( 12) 00:13:22.191 13580.954 - 13686.233: 98.1468% ( 10) 00:13:22.191 13686.233 - 13791.512: 98.2340% ( 12) 00:13:22.191 13791.512 - 13896.790: 98.3140% ( 11) 00:13:22.191 13896.790 - 14002.069: 98.3939% ( 11) 00:13:22.191 14002.069 - 14107.348: 98.4520% ( 8) 00:13:22.191 14107.348 - 14212.627: 98.4884% ( 5) 00:13:22.191 14212.627 - 14317.905: 98.5174% ( 4) 00:13:22.191 14317.905 - 14423.184: 98.5465% ( 4) 00:13:22.191 14423.184 - 14528.463: 98.5828% ( 5) 00:13:22.191 14528.463 - 14633.741: 98.6047% ( 3) 00:13:22.191 14633.741 - 14739.020: 98.6265% ( 3) 00:13:22.191 14739.020 - 14844.299: 98.6555% ( 4) 00:13:22.191 14844.299 - 14949.578: 98.6846% ( 4) 00:13:22.191 14949.578 - 15054.856: 98.7137% ( 4) 00:13:22.191 15054.856 - 15160.135: 98.7355% ( 3) 00:13:22.191 15160.135 - 15265.414: 98.7573% ( 3) 00:13:22.191 15265.414 - 15370.692: 98.7791% ( 3) 00:13:22.191 15370.692 - 15475.971: 98.8081% ( 4) 00:13:22.191 15475.971 - 15581.250: 98.8299% ( 3) 00:13:22.191 15581.250 - 15686.529: 98.8590% ( 4) 00:13:22.191 15686.529 - 15791.807: 98.8881% ( 4) 00:13:22.191 15791.807 - 15897.086: 98.9099% ( 3) 00:13:22.191 15897.086 - 16002.365: 98.9317% ( 3) 00:13:22.191 16002.365 - 16107.643: 98.9608% ( 4) 00:13:22.191 16107.643 - 16212.922: 98.9826% ( 3) 00:13:22.191 16212.922 - 16318.201: 99.0116% ( 4) 00:13:22.191 16318.201 - 16423.480: 99.0407% ( 4) 00:13:22.191 16423.480 - 16528.758: 99.0625% ( 3) 00:13:22.191 16528.758 - 16634.037: 99.0698% ( 1) 00:13:22.191 36005.320 - 36215.878: 99.0843% ( 2) 00:13:22.191 36215.878 - 36426.435: 99.1424% ( 8) 00:13:22.191 36426.435 - 36636.993: 99.1933% ( 7) 00:13:22.191 36636.993 - 36847.550: 99.2442% ( 7) 00:13:22.191 36847.550 - 37058.108: 99.3023% ( 8) 00:13:22.191 37058.108 - 37268.665: 99.3532% ( 7) 00:13:22.191 37268.665 - 37479.222: 99.4041% ( 7) 00:13:22.191 37479.222 - 37689.780: 99.4622% ( 8) 00:13:22.191 37689.780 - 37900.337: 99.5058% ( 6) 00:13:22.191 37900.337 - 38110.895: 99.5349% ( 4) 00:13:22.191 42743.158 - 42953.716: 99.5785% ( 6) 00:13:22.191 42953.716 - 43164.273: 99.6366% ( 8) 00:13:22.191 43164.273 - 43374.831: 99.6875% ( 7) 00:13:22.191 43374.831 - 43585.388: 99.7384% ( 7) 00:13:22.191 43585.388 - 43795.945: 99.7965% ( 8) 00:13:22.191 43795.945 - 44006.503: 99.8547% ( 8) 00:13:22.191 44006.503 - 44217.060: 99.9128% ( 8) 00:13:22.191 44217.060 - 44427.618: 99.9637% ( 7) 00:13:22.191 44427.618 - 44638.175: 100.0000% ( 5) 00:13:22.191 00:13:22.191 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:22.191 ============================================================================== 00:13:22.191 Range in us Cumulative IO count 00:13:22.191 7632.707 - 7685.346: 0.0073% ( 1) 00:13:22.191 7685.346 - 7737.986: 0.0945% ( 12) 00:13:22.191 7737.986 - 7790.625: 0.3488% ( 35) 00:13:22.191 7790.625 - 7843.264: 0.9302% ( 80) 00:13:22.191 7843.264 - 7895.904: 1.6715% ( 102) 00:13:22.191 7895.904 - 7948.543: 2.7907% ( 154) 00:13:22.191 7948.543 - 8001.182: 4.0189% ( 169) 00:13:22.191 8001.182 - 8053.822: 5.6105% ( 219) 00:13:22.191 8053.822 - 8106.461: 7.4564% ( 254) 00:13:22.191 8106.461 - 8159.100: 9.7965% ( 322) 00:13:22.191 8159.100 - 8211.740: 12.2166% ( 333) 00:13:22.191 8211.740 - 8264.379: 15.0727% ( 393) 00:13:22.191 8264.379 - 
8317.018: 18.1395% ( 422) 00:13:22.191 8317.018 - 8369.658: 21.2863% ( 433) 00:13:22.191 8369.658 - 8422.297: 24.7384% ( 475) 00:13:22.191 8422.297 - 8474.937: 28.3721% ( 500) 00:13:22.191 8474.937 - 8527.576: 32.0276% ( 503) 00:13:22.191 8527.576 - 8580.215: 35.8212% ( 522) 00:13:22.191 8580.215 - 8632.855: 39.7965% ( 547) 00:13:22.191 8632.855 - 8685.494: 43.4811% ( 507) 00:13:22.191 8685.494 - 8738.133: 47.0640% ( 493) 00:13:22.191 8738.133 - 8790.773: 50.5887% ( 485) 00:13:22.191 8790.773 - 8843.412: 53.9317% ( 460) 00:13:22.191 8843.412 - 8896.051: 57.0785% ( 433) 00:13:22.191 8896.051 - 8948.691: 60.0291% ( 406) 00:13:22.191 8948.691 - 9001.330: 62.7834% ( 379) 00:13:22.191 9001.330 - 9053.969: 65.1672% ( 328) 00:13:22.191 9053.969 - 9106.609: 67.4201% ( 310) 00:13:22.191 9106.609 - 9159.248: 69.5785% ( 297) 00:13:22.191 9159.248 - 9211.888: 71.6061% ( 279) 00:13:22.191 9211.888 - 9264.527: 73.6483% ( 281) 00:13:22.191 9264.527 - 9317.166: 75.5087% ( 256) 00:13:22.191 9317.166 - 9369.806: 77.2238% ( 236) 00:13:22.191 9369.806 - 9422.445: 78.7718% ( 213) 00:13:22.191 9422.445 - 9475.084: 80.2544% ( 204) 00:13:22.191 9475.084 - 9527.724: 81.6497% ( 192) 00:13:22.192 9527.724 - 9580.363: 82.9797% ( 183) 00:13:22.192 9580.363 - 9633.002: 84.2006% ( 168) 00:13:22.192 9633.002 - 9685.642: 85.2616% ( 146) 00:13:22.192 9685.642 - 9738.281: 86.2718% ( 139) 00:13:22.192 9738.281 - 9790.920: 87.0930% ( 113) 00:13:22.192 9790.920 - 9843.560: 87.9360% ( 116) 00:13:22.192 9843.560 - 9896.199: 88.6701% ( 101) 00:13:22.192 9896.199 - 9948.839: 89.3750% ( 97) 00:13:22.192 9948.839 - 10001.478: 89.9564% ( 80) 00:13:22.192 10001.478 - 10054.117: 90.4433% ( 67) 00:13:22.192 10054.117 - 10106.757: 90.8285% ( 53) 00:13:22.192 10106.757 - 10159.396: 91.2500% ( 58) 00:13:22.192 10159.396 - 10212.035: 91.5770% ( 45) 00:13:22.192 10212.035 - 10264.675: 91.8968% ( 44) 00:13:22.192 10264.675 - 10317.314: 92.1730% ( 38) 00:13:22.192 10317.314 - 10369.953: 92.3692% ( 27) 00:13:22.192 10369.953 - 10422.593: 92.5509% ( 25) 00:13:22.192 10422.593 - 10475.232: 92.7035% ( 21) 00:13:22.192 10475.232 - 10527.871: 92.8198% ( 16) 00:13:22.192 10527.871 - 10580.511: 93.0015% ( 25) 00:13:22.192 10580.511 - 10633.150: 93.2049% ( 28) 00:13:22.192 10633.150 - 10685.790: 93.4012% ( 27) 00:13:22.192 10685.790 - 10738.429: 93.6047% ( 28) 00:13:22.192 10738.429 - 10791.068: 93.7645% ( 22) 00:13:22.192 10791.068 - 10843.708: 93.9172% ( 21) 00:13:22.192 10843.708 - 10896.347: 94.0698% ( 21) 00:13:22.192 10896.347 - 10948.986: 94.1788% ( 15) 00:13:22.192 10948.986 - 11001.626: 94.3096% ( 18) 00:13:22.192 11001.626 - 11054.265: 94.4840% ( 24) 00:13:22.192 11054.265 - 11106.904: 94.6294% ( 20) 00:13:22.192 11106.904 - 11159.544: 94.7602% ( 18) 00:13:22.192 11159.544 - 11212.183: 94.8910% ( 18) 00:13:22.192 11212.183 - 11264.822: 95.0291% ( 19) 00:13:22.192 11264.822 - 11317.462: 95.1672% ( 19) 00:13:22.192 11317.462 - 11370.101: 95.2834% ( 16) 00:13:22.192 11370.101 - 11422.741: 95.4142% ( 18) 00:13:22.192 11422.741 - 11475.380: 95.5087% ( 13) 00:13:22.192 11475.380 - 11528.019: 95.6177% ( 15) 00:13:22.192 11528.019 - 11580.659: 95.7122% ( 13) 00:13:22.192 11580.659 - 11633.298: 95.8067% ( 13) 00:13:22.192 11633.298 - 11685.937: 95.9012% ( 13) 00:13:22.192 11685.937 - 11738.577: 95.9884% ( 12) 00:13:22.192 11738.577 - 11791.216: 96.1047% ( 16) 00:13:22.192 11791.216 - 11843.855: 96.2064% ( 14) 00:13:22.192 11843.855 - 11896.495: 96.3009% ( 13) 00:13:22.192 11896.495 - 11949.134: 96.3953% ( 13) 00:13:22.192 11949.134 - 12001.773: 
96.4971% ( 14) 00:13:22.192 12001.773 - 12054.413: 96.5988% ( 14) 00:13:22.192 12054.413 - 12107.052: 96.7006% ( 14) 00:13:22.192 12107.052 - 12159.692: 96.7733% ( 10) 00:13:22.192 12159.692 - 12212.331: 96.8459% ( 10) 00:13:22.192 12212.331 - 12264.970: 96.9186% ( 10) 00:13:22.192 12264.970 - 12317.610: 96.9767% ( 8) 00:13:22.192 12317.610 - 12370.249: 97.0349% ( 8) 00:13:22.192 12370.249 - 12422.888: 97.0858% ( 7) 00:13:22.192 12422.888 - 12475.528: 97.1439% ( 8) 00:13:22.192 12475.528 - 12528.167: 97.1802% ( 5) 00:13:22.192 12528.167 - 12580.806: 97.2093% ( 4) 00:13:22.192 12580.806 - 12633.446: 97.2384% ( 4) 00:13:22.192 12633.446 - 12686.085: 97.2674% ( 4) 00:13:22.192 12686.085 - 12738.724: 97.2892% ( 3) 00:13:22.192 12738.724 - 12791.364: 97.3110% ( 3) 00:13:22.192 12791.364 - 12844.003: 97.3547% ( 6) 00:13:22.192 12844.003 - 12896.643: 97.3910% ( 5) 00:13:22.192 12896.643 - 12949.282: 97.4273% ( 5) 00:13:22.192 12949.282 - 13001.921: 97.4782% ( 7) 00:13:22.192 13001.921 - 13054.561: 97.5073% ( 4) 00:13:22.192 13054.561 - 13107.200: 97.5509% ( 6) 00:13:22.192 13107.200 - 13159.839: 97.5945% ( 6) 00:13:22.192 13159.839 - 13212.479: 97.6381% ( 6) 00:13:22.192 13212.479 - 13265.118: 97.6672% ( 4) 00:13:22.192 13265.118 - 13317.757: 97.7108% ( 6) 00:13:22.192 13317.757 - 13370.397: 97.7471% ( 5) 00:13:22.192 13370.397 - 13423.036: 97.7762% ( 4) 00:13:22.192 13423.036 - 13475.676: 97.8052% ( 4) 00:13:22.192 13475.676 - 13580.954: 97.8634% ( 8) 00:13:22.192 13580.954 - 13686.233: 97.9433% ( 11) 00:13:22.192 13686.233 - 13791.512: 97.9942% ( 7) 00:13:22.192 13791.512 - 13896.790: 98.0523% ( 8) 00:13:22.192 13896.790 - 14002.069: 98.1105% ( 8) 00:13:22.192 14002.069 - 14107.348: 98.1831% ( 10) 00:13:22.192 14107.348 - 14212.627: 98.2703% ( 12) 00:13:22.192 14212.627 - 14317.905: 98.3576% ( 12) 00:13:22.192 14317.905 - 14423.184: 98.4302% ( 10) 00:13:22.192 14423.184 - 14528.463: 98.5102% ( 11) 00:13:22.192 14528.463 - 14633.741: 98.5610% ( 7) 00:13:22.192 14633.741 - 14739.020: 98.6047% ( 6) 00:13:22.192 14739.020 - 14844.299: 98.6628% ( 8) 00:13:22.192 14844.299 - 14949.578: 98.7137% ( 7) 00:13:22.192 14949.578 - 15054.856: 98.7791% ( 9) 00:13:22.192 15054.856 - 15160.135: 98.8299% ( 7) 00:13:22.192 15160.135 - 15265.414: 98.8735% ( 6) 00:13:22.192 15265.414 - 15370.692: 98.8953% ( 3) 00:13:22.192 15370.692 - 15475.971: 98.9244% ( 4) 00:13:22.192 15475.971 - 15581.250: 98.9462% ( 3) 00:13:22.192 15581.250 - 15686.529: 98.9680% ( 3) 00:13:22.192 15686.529 - 15791.807: 98.9898% ( 3) 00:13:22.192 15791.807 - 15897.086: 99.0189% ( 4) 00:13:22.192 15897.086 - 16002.365: 99.0407% ( 3) 00:13:22.192 16002.365 - 16107.643: 99.0698% ( 4) 00:13:22.192 34110.304 - 34320.861: 99.1206% ( 7) 00:13:22.192 34320.861 - 34531.418: 99.1715% ( 7) 00:13:22.192 34531.418 - 34741.976: 99.2151% ( 6) 00:13:22.192 34741.976 - 34952.533: 99.2660% ( 7) 00:13:22.192 34952.533 - 35163.091: 99.3241% ( 8) 00:13:22.192 35163.091 - 35373.648: 99.3750% ( 7) 00:13:22.192 35373.648 - 35584.206: 99.4331% ( 8) 00:13:22.192 35584.206 - 35794.763: 99.4840% ( 7) 00:13:22.192 35794.763 - 36005.320: 99.5349% ( 7) 00:13:22.192 40427.027 - 40637.584: 99.5567% ( 3) 00:13:22.192 40637.584 - 40848.141: 99.6148% ( 8) 00:13:22.192 40848.141 - 41058.699: 99.6584% ( 6) 00:13:22.192 41058.699 - 41269.256: 99.7093% ( 7) 00:13:22.192 41269.256 - 41479.814: 99.7602% ( 7) 00:13:22.192 41479.814 - 41690.371: 99.8110% ( 7) 00:13:22.192 41690.371 - 41900.929: 99.8619% ( 7) 00:13:22.192 41900.929 - 42111.486: 99.9201% ( 8) 00:13:22.192 42111.486 - 
42322.043: 99.9709% ( 7) 00:13:22.192 42322.043 - 42532.601: 100.0000% ( 4) 00:13:22.192 00:13:22.192 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:22.192 ============================================================================== 00:13:22.192 Range in us Cumulative IO count 00:13:22.192 7685.346 - 7737.986: 0.0868% ( 12) 00:13:22.192 7737.986 - 7790.625: 0.3689% ( 39) 00:13:22.192 7790.625 - 7843.264: 0.8608% ( 68) 00:13:22.192 7843.264 - 7895.904: 1.6276% ( 106) 00:13:22.192 7895.904 - 7948.543: 2.6042% ( 135) 00:13:22.192 7948.543 - 8001.182: 3.7688% ( 161) 00:13:22.192 8001.182 - 8053.822: 5.2373% ( 203) 00:13:22.192 8053.822 - 8106.461: 7.2049% ( 272) 00:13:22.192 8106.461 - 8159.100: 9.4546% ( 311) 00:13:22.192 8159.100 - 8211.740: 12.0732% ( 362) 00:13:22.192 8211.740 - 8264.379: 14.8220% ( 380) 00:13:22.192 8264.379 - 8317.018: 17.8313% ( 416) 00:13:22.192 8317.018 - 8369.658: 21.0720% ( 448) 00:13:22.192 8369.658 - 8422.297: 24.4502% ( 467) 00:13:22.192 8422.297 - 8474.937: 27.9152% ( 479) 00:13:22.192 8474.937 - 8527.576: 31.6045% ( 510) 00:13:22.192 8527.576 - 8580.215: 35.4311% ( 529) 00:13:22.192 8580.215 - 8632.855: 39.3446% ( 541) 00:13:22.192 8632.855 - 8685.494: 43.0266% ( 509) 00:13:22.192 8685.494 - 8738.133: 46.6218% ( 497) 00:13:22.192 8738.133 - 8790.773: 50.0868% ( 479) 00:13:22.192 8790.773 - 8843.412: 53.4288% ( 462) 00:13:22.192 8843.412 - 8896.051: 56.8793% ( 477) 00:13:22.192 8896.051 - 8948.691: 59.8958% ( 417) 00:13:22.192 8948.691 - 9001.330: 62.5362% ( 365) 00:13:22.192 9001.330 - 9053.969: 64.9161% ( 329) 00:13:22.192 9053.969 - 9106.609: 67.1513% ( 309) 00:13:22.192 9106.609 - 9159.248: 69.3866% ( 309) 00:13:22.192 9159.248 - 9211.888: 71.4771% ( 289) 00:13:22.192 9211.888 - 9264.527: 73.4375% ( 271) 00:13:22.192 9264.527 - 9317.166: 75.3038% ( 258) 00:13:22.192 9317.166 - 9369.806: 77.1267% ( 252) 00:13:22.192 9369.806 - 9422.445: 78.6314% ( 208) 00:13:22.192 9422.445 - 9475.084: 79.9986% ( 189) 00:13:22.192 9475.084 - 9527.724: 81.3079% ( 181) 00:13:22.192 9527.724 - 9580.363: 82.5448% ( 171) 00:13:22.192 9580.363 - 9633.002: 83.7312% ( 164) 00:13:22.192 9633.002 - 9685.642: 84.8235% ( 151) 00:13:22.192 9685.642 - 9738.281: 85.7350% ( 126) 00:13:22.193 9738.281 - 9790.920: 86.5596% ( 114) 00:13:22.193 9790.920 - 9843.560: 87.3987% ( 116) 00:13:22.193 9843.560 - 9896.199: 88.1944% ( 110) 00:13:22.193 9896.199 - 9948.839: 88.9685% ( 107) 00:13:22.193 9948.839 - 10001.478: 89.6846% ( 99) 00:13:22.193 10001.478 - 10054.117: 90.2488% ( 78) 00:13:22.193 10054.117 - 10106.757: 90.7407% ( 68) 00:13:22.193 10106.757 - 10159.396: 91.1241% ( 53) 00:13:22.193 10159.396 - 10212.035: 91.5148% ( 54) 00:13:22.193 10212.035 - 10264.675: 91.8258% ( 43) 00:13:22.193 10264.675 - 10317.314: 92.1079% ( 39) 00:13:22.193 10317.314 - 10369.953: 92.3683% ( 36) 00:13:22.193 10369.953 - 10422.593: 92.5781% ( 29) 00:13:22.193 10422.593 - 10475.232: 92.7662% ( 26) 00:13:22.193 10475.232 - 10527.871: 92.9688% ( 28) 00:13:22.193 10527.871 - 10580.511: 93.1713% ( 28) 00:13:22.193 10580.511 - 10633.150: 93.3594% ( 26) 00:13:22.193 10633.150 - 10685.790: 93.5258% ( 23) 00:13:22.193 10685.790 - 10738.429: 93.6921% ( 23) 00:13:22.193 10738.429 - 10791.068: 93.8368% ( 20) 00:13:22.193 10791.068 - 10843.708: 93.9670% ( 18) 00:13:22.193 10843.708 - 10896.347: 94.0828% ( 16) 00:13:22.193 10896.347 - 10948.986: 94.2057% ( 17) 00:13:22.193 10948.986 - 11001.626: 94.3432% ( 19) 00:13:22.193 11001.626 - 11054.265: 94.4661% ( 17) 00:13:22.193 11054.265 - 11106.904: 94.6253% 
( 22) 00:13:22.193 11106.904 - 11159.544: 94.7338% ( 15) 00:13:22.193 11159.544 - 11212.183: 94.8351% ( 14) 00:13:22.193 11212.183 - 11264.822: 94.9363% ( 14) 00:13:22.193 11264.822 - 11317.462: 95.0521% ( 16) 00:13:22.193 11317.462 - 11370.101: 95.1751% ( 17) 00:13:22.193 11370.101 - 11422.741: 95.2763% ( 14) 00:13:22.193 11422.741 - 11475.380: 95.3776% ( 14) 00:13:22.193 11475.380 - 11528.019: 95.4716% ( 13) 00:13:22.193 11528.019 - 11580.659: 95.5512% ( 11) 00:13:22.193 11580.659 - 11633.298: 95.6742% ( 17) 00:13:22.193 11633.298 - 11685.937: 95.7827% ( 15) 00:13:22.193 11685.937 - 11738.577: 95.8840% ( 14) 00:13:22.193 11738.577 - 11791.216: 95.9852% ( 14) 00:13:22.193 11791.216 - 11843.855: 96.0793% ( 13) 00:13:22.193 11843.855 - 11896.495: 96.1806% ( 14) 00:13:22.193 11896.495 - 11949.134: 96.2891% ( 15) 00:13:22.193 11949.134 - 12001.773: 96.3976% ( 15) 00:13:22.193 12001.773 - 12054.413: 96.4844% ( 12) 00:13:22.193 12054.413 - 12107.052: 96.5567% ( 10) 00:13:22.193 12107.052 - 12159.692: 96.6291% ( 10) 00:13:22.193 12159.692 - 12212.331: 96.7159% ( 12) 00:13:22.193 12212.331 - 12264.970: 96.7810% ( 9) 00:13:22.193 12264.970 - 12317.610: 96.8678% ( 12) 00:13:22.193 12317.610 - 12370.249: 96.9401% ( 10) 00:13:22.193 12370.249 - 12422.888: 97.0124% ( 10) 00:13:22.193 12422.888 - 12475.528: 97.0920% ( 11) 00:13:22.193 12475.528 - 12528.167: 97.1644% ( 10) 00:13:22.193 12528.167 - 12580.806: 97.2439% ( 11) 00:13:22.193 12580.806 - 12633.446: 97.2946% ( 7) 00:13:22.193 12633.446 - 12686.085: 97.3597% ( 9) 00:13:22.193 12686.085 - 12738.724: 97.4103% ( 7) 00:13:22.193 12738.724 - 12791.364: 97.4609% ( 7) 00:13:22.193 12791.364 - 12844.003: 97.4826% ( 3) 00:13:22.193 12844.003 - 12896.643: 97.5116% ( 4) 00:13:22.193 12896.643 - 12949.282: 97.5333% ( 3) 00:13:22.193 12949.282 - 13001.921: 97.5622% ( 4) 00:13:22.193 13001.921 - 13054.561: 97.5839% ( 3) 00:13:22.193 13054.561 - 13107.200: 97.6056% ( 3) 00:13:22.193 13107.200 - 13159.839: 97.6345% ( 4) 00:13:22.193 13159.839 - 13212.479: 97.6635% ( 4) 00:13:22.193 13212.479 - 13265.118: 97.6780% ( 2) 00:13:22.193 13265.118 - 13317.757: 97.7069% ( 4) 00:13:22.193 13317.757 - 13370.397: 97.7286% ( 3) 00:13:22.193 13370.397 - 13423.036: 97.7431% ( 2) 00:13:22.193 13423.036 - 13475.676: 97.7575% ( 2) 00:13:22.193 13475.676 - 13580.954: 97.7865% ( 4) 00:13:22.193 13580.954 - 13686.233: 97.8516% ( 9) 00:13:22.193 13686.233 - 13791.512: 97.9022% ( 7) 00:13:22.193 13791.512 - 13896.790: 97.9528% ( 7) 00:13:22.193 13896.790 - 14002.069: 98.0035% ( 7) 00:13:22.193 14002.069 - 14107.348: 98.0686% ( 9) 00:13:22.193 14107.348 - 14212.627: 98.1554% ( 12) 00:13:22.193 14212.627 - 14317.905: 98.2350% ( 11) 00:13:22.193 14317.905 - 14423.184: 98.3145% ( 11) 00:13:22.193 14423.184 - 14528.463: 98.4086% ( 13) 00:13:22.193 14528.463 - 14633.741: 98.4881% ( 11) 00:13:22.193 14633.741 - 14739.020: 98.5749% ( 12) 00:13:22.193 14739.020 - 14844.299: 98.6545% ( 11) 00:13:22.193 14844.299 - 14949.578: 98.7124% ( 8) 00:13:22.193 14949.578 - 15054.856: 98.7775% ( 9) 00:13:22.193 15054.856 - 15160.135: 98.8354% ( 8) 00:13:22.193 15160.135 - 15265.414: 98.8860% ( 7) 00:13:22.193 15265.414 - 15370.692: 98.9439% ( 8) 00:13:22.193 15370.692 - 15475.971: 98.9945% ( 7) 00:13:22.193 15475.971 - 15581.250: 99.0451% ( 7) 00:13:22.193 15581.250 - 15686.529: 99.0741% ( 4) 00:13:22.193 26846.072 - 26951.351: 99.0885% ( 2) 00:13:22.193 26951.351 - 27161.908: 99.1247% ( 5) 00:13:22.193 27161.908 - 27372.466: 99.1826% ( 8) 00:13:22.193 27372.466 - 27583.023: 99.2405% ( 8) 00:13:22.193 
27583.023 - 27793.581: 99.2839% ( 6) 00:13:22.193 27793.581 - 28004.138: 99.3417% ( 8) 00:13:22.193 28004.138 - 28214.696: 99.4068% ( 9) 00:13:22.193 28214.696 - 28425.253: 99.4575% ( 7) 00:13:22.193 28425.253 - 28635.810: 99.5153% ( 8) 00:13:22.193 28635.810 - 28846.368: 99.5370% ( 3) 00:13:22.193 33268.074 - 33478.631: 99.5443% ( 1) 00:13:22.193 33478.631 - 33689.189: 99.5949% ( 7) 00:13:22.193 33689.189 - 33899.746: 99.6528% ( 8) 00:13:22.193 33899.746 - 34110.304: 99.7034% ( 7) 00:13:22.193 34110.304 - 34320.861: 99.7613% ( 8) 00:13:22.193 34320.861 - 34531.418: 99.8119% ( 7) 00:13:22.193 34531.418 - 34741.976: 99.8626% ( 7) 00:13:22.193 34741.976 - 34952.533: 99.9132% ( 7) 00:13:22.193 34952.533 - 35163.091: 99.9638% ( 7) 00:13:22.193 35163.091 - 35373.648: 100.0000% ( 5) 00:13:22.193 00:13:22.193 11:24:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:13:23.572 Initializing NVMe Controllers 00:13:23.572 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:23.572 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:23.572 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:23.572 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:23.572 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:23.572 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:23.572 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:23.572 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:23.572 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:23.572 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:23.572 Initialization complete. Launching workers. 00:13:23.572 ======================================================== 00:13:23.572 Latency(us) 00:13:23.572 Device Information : IOPS MiB/s Average min max 00:13:23.572 PCIE (0000:00:10.0) NSID 1 from core 0: 9071.32 106.30 14175.50 9167.22 46287.40 00:13:23.572 PCIE (0000:00:11.0) NSID 1 from core 0: 9071.32 106.30 14162.58 9390.81 44883.85 00:13:23.572 PCIE (0000:00:13.0) NSID 1 from core 0: 9071.32 106.30 14147.94 9462.20 43966.19 00:13:23.572 PCIE (0000:00:12.0) NSID 1 from core 0: 9071.32 106.30 14131.39 9273.48 42500.87 00:13:23.572 PCIE (0000:00:12.0) NSID 2 from core 0: 9071.32 106.30 14111.63 9171.97 41227.52 00:13:23.572 PCIE (0000:00:12.0) NSID 3 from core 0: 9071.32 106.30 14090.85 9223.51 39623.85 00:13:23.572 ======================================================== 00:13:23.572 Total : 54427.91 637.83 14136.65 9167.22 46287.40 00:13:23.572 00:13:23.572 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:23.572 ================================================================================= 00:13:23.572 1.00000% : 9527.724us 00:13:23.572 10.00000% : 10317.314us 00:13:23.572 25.00000% : 11317.462us 00:13:23.572 50.00000% : 13580.954us 00:13:23.572 75.00000% : 16212.922us 00:13:23.572 90.00000% : 18107.939us 00:13:23.572 95.00000% : 19371.284us 00:13:23.572 98.00000% : 20739.907us 00:13:23.572 99.00000% : 33057.516us 00:13:23.572 99.50000% : 44217.060us 00:13:23.572 99.90000% : 45901.520us 00:13:23.572 99.99000% : 46322.635us 00:13:23.572 99.99900% : 46322.635us 00:13:23.572 99.99990% : 46322.635us 00:13:23.572 99.99999% : 46322.635us 00:13:23.572 00:13:23.572 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:23.572 ================================================================================= 00:13:23.572 1.00000% : 9738.281us 00:13:23.572 
10.00000% : 10317.314us 00:13:23.572 25.00000% : 11212.183us 00:13:23.572 50.00000% : 13580.954us 00:13:23.572 75.00000% : 16107.643us 00:13:23.572 90.00000% : 18107.939us 00:13:23.572 95.00000% : 18950.169us 00:13:23.572 98.00000% : 20950.464us 00:13:23.572 99.00000% : 32846.959us 00:13:23.572 99.50000% : 43164.273us 00:13:23.572 99.90000% : 44638.175us 00:13:23.572 99.99000% : 45059.290us 00:13:23.572 99.99900% : 45059.290us 00:13:23.572 99.99990% : 45059.290us 00:13:23.572 99.99999% : 45059.290us 00:13:23.572 00:13:23.572 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:23.572 ================================================================================= 00:13:23.572 1.00000% : 9790.920us 00:13:23.572 10.00000% : 10317.314us 00:13:23.572 25.00000% : 11264.822us 00:13:23.572 50.00000% : 13791.512us 00:13:23.572 75.00000% : 16002.365us 00:13:23.572 90.00000% : 17897.382us 00:13:23.572 95.00000% : 19160.726us 00:13:23.572 98.00000% : 21161.022us 00:13:23.572 99.00000% : 32636.402us 00:13:23.572 99.50000% : 42322.043us 00:13:23.572 99.90000% : 43795.945us 00:13:23.572 99.99000% : 44006.503us 00:13:23.572 99.99900% : 44006.503us 00:13:23.572 99.99990% : 44006.503us 00:13:23.572 99.99999% : 44006.503us 00:13:23.572 00:13:23.572 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:23.572 ================================================================================= 00:13:23.572 1.00000% : 9790.920us 00:13:23.572 10.00000% : 10369.953us 00:13:23.572 25.00000% : 11317.462us 00:13:23.572 50.00000% : 13686.233us 00:13:23.572 75.00000% : 16002.365us 00:13:23.572 90.00000% : 17792.103us 00:13:23.572 95.00000% : 19266.005us 00:13:23.572 98.00000% : 20739.907us 00:13:23.572 99.00000% : 31373.057us 00:13:23.572 99.50000% : 40848.141us 00:13:23.572 99.90000% : 42322.043us 00:13:23.572 99.99000% : 42532.601us 00:13:23.572 99.99900% : 42532.601us 00:13:23.572 99.99990% : 42532.601us 00:13:23.572 99.99999% : 42532.601us 00:13:23.572 00:13:23.572 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:23.572 ================================================================================= 00:13:23.572 1.00000% : 9738.281us 00:13:23.572 10.00000% : 10317.314us 00:13:23.572 25.00000% : 11370.101us 00:13:23.572 50.00000% : 13686.233us 00:13:23.572 75.00000% : 16002.365us 00:13:23.572 90.00000% : 17897.382us 00:13:23.572 95.00000% : 19266.005us 00:13:23.572 98.00000% : 20318.792us 00:13:23.572 99.00000% : 29899.155us 00:13:23.572 99.50000% : 39374.239us 00:13:23.572 99.90000% : 41058.699us 00:13:23.572 99.99000% : 41269.256us 00:13:23.572 99.99900% : 41269.256us 00:13:23.572 99.99990% : 41269.256us 00:13:23.572 99.99999% : 41269.256us 00:13:23.572 00:13:23.572 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:23.572 ================================================================================= 00:13:23.572 1.00000% : 9738.281us 00:13:23.572 10.00000% : 10264.675us 00:13:23.572 25.00000% : 11370.101us 00:13:23.572 50.00000% : 13580.954us 00:13:23.572 75.00000% : 16107.643us 00:13:23.572 90.00000% : 18002.660us 00:13:23.572 95.00000% : 19160.726us 00:13:23.572 98.00000% : 20318.792us 00:13:23.572 99.00000% : 28425.253us 00:13:23.572 99.50000% : 38110.895us 00:13:23.572 99.90000% : 39374.239us 00:13:23.572 99.99000% : 39795.354us 00:13:23.572 99.99900% : 39795.354us 00:13:23.572 99.99990% : 39795.354us 00:13:23.572 99.99999% : 39795.354us 00:13:23.572 00:13:23.572 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 
00:13:23.572 ============================================================================== 00:13:23.572 Range in us Cumulative IO count 00:13:23.572 9159.248 - 9211.888: 0.0330% ( 3) 00:13:23.572 9211.888 - 9264.527: 0.0990% ( 6) 00:13:23.572 9264.527 - 9317.166: 0.1761% ( 7) 00:13:23.572 9317.166 - 9369.806: 0.3631% ( 17) 00:13:23.572 9369.806 - 9422.445: 0.5722% ( 19) 00:13:23.572 9422.445 - 9475.084: 0.7482% ( 16) 00:13:23.572 9475.084 - 9527.724: 1.0453% ( 27) 00:13:23.572 9527.724 - 9580.363: 1.2434% ( 18) 00:13:23.572 9580.363 - 9633.002: 1.4855% ( 22) 00:13:23.572 9633.002 - 9685.642: 1.7716% ( 26) 00:13:23.572 9685.642 - 9738.281: 2.0797% ( 28) 00:13:23.572 9738.281 - 9790.920: 2.5198% ( 40) 00:13:23.572 9790.920 - 9843.560: 3.2130% ( 63) 00:13:23.572 9843.560 - 9896.199: 4.0603% ( 77) 00:13:23.572 9896.199 - 9948.839: 5.1496% ( 99) 00:13:23.572 9948.839 - 10001.478: 6.0189% ( 79) 00:13:23.572 10001.478 - 10054.117: 6.8442% ( 75) 00:13:23.572 10054.117 - 10106.757: 7.6585% ( 74) 00:13:23.572 10106.757 - 10159.396: 8.4837% ( 75) 00:13:23.572 10159.396 - 10212.035: 8.9569% ( 43) 00:13:23.572 10212.035 - 10264.675: 9.6171% ( 60) 00:13:23.572 10264.675 - 10317.314: 10.2553% ( 58) 00:13:23.572 10317.314 - 10369.953: 11.0145% ( 69) 00:13:23.572 10369.953 - 10422.593: 11.9278% ( 83) 00:13:23.572 10422.593 - 10475.232: 12.9071% ( 89) 00:13:23.572 10475.232 - 10527.871: 13.9855% ( 98) 00:13:23.572 10527.871 - 10580.511: 15.0638% ( 98) 00:13:23.572 10580.511 - 10633.150: 16.1972% ( 103) 00:13:23.572 10633.150 - 10685.790: 17.1985% ( 91) 00:13:23.572 10685.790 - 10738.429: 17.8697% ( 61) 00:13:23.572 10738.429 - 10791.068: 18.4529% ( 53) 00:13:23.572 10791.068 - 10843.708: 19.1461% ( 63) 00:13:23.572 10843.708 - 10896.347: 20.1474% ( 91) 00:13:23.572 10896.347 - 10948.986: 20.7857% ( 58) 00:13:23.572 10948.986 - 11001.626: 21.5889% ( 73) 00:13:23.572 11001.626 - 11054.265: 22.3261% ( 67) 00:13:23.572 11054.265 - 11106.904: 22.9754% ( 59) 00:13:23.572 11106.904 - 11159.544: 23.6796% ( 64) 00:13:23.572 11159.544 - 11212.183: 24.2738% ( 54) 00:13:23.572 11212.183 - 11264.822: 24.9780% ( 64) 00:13:23.572 11264.822 - 11317.462: 25.3961% ( 38) 00:13:23.572 11317.462 - 11370.101: 25.9133% ( 47) 00:13:23.572 11370.101 - 11422.741: 26.3644% ( 41) 00:13:23.572 11422.741 - 11475.380: 26.8816% ( 47) 00:13:23.572 11475.380 - 11528.019: 27.4648% ( 53) 00:13:23.572 11528.019 - 11580.659: 28.1910% ( 66) 00:13:23.572 11580.659 - 11633.298: 28.6862% ( 45) 00:13:23.572 11633.298 - 11685.937: 29.3244% ( 58) 00:13:23.572 11685.937 - 11738.577: 29.9406% ( 56) 00:13:23.572 11738.577 - 11791.216: 30.6778% ( 67) 00:13:23.572 11791.216 - 11843.855: 31.1620% ( 44) 00:13:23.572 11843.855 - 11896.495: 31.7232% ( 51) 00:13:23.572 11896.495 - 11949.134: 32.2953% ( 52) 00:13:23.572 11949.134 - 12001.773: 32.5704% ( 25) 00:13:23.572 12001.773 - 12054.413: 33.0656% ( 45) 00:13:23.572 12054.413 - 12107.052: 33.4947% ( 39) 00:13:23.572 12107.052 - 12159.692: 33.9899% ( 45) 00:13:23.572 12159.692 - 12212.331: 34.4080% ( 38) 00:13:23.572 12212.331 - 12264.970: 34.8592% ( 41) 00:13:23.572 12264.970 - 12317.610: 35.3983% ( 49) 00:13:23.572 12317.610 - 12370.249: 36.0695% ( 61) 00:13:23.572 12370.249 - 12422.888: 36.5097% ( 40) 00:13:23.572 12422.888 - 12475.528: 37.1039% ( 54) 00:13:23.572 12475.528 - 12528.167: 37.8301% ( 66) 00:13:23.572 12528.167 - 12580.806: 38.5453% ( 65) 00:13:23.572 12580.806 - 12633.446: 39.2826% ( 67) 00:13:23.572 12633.446 - 12686.085: 39.9978% ( 65) 00:13:23.572 12686.085 - 12738.724: 40.6580% ( 60) 
00:13:23.572 12738.724 - 12791.364: 41.2412% ( 53) 00:13:23.572 12791.364 - 12844.003: 42.1215% ( 80) 00:13:23.572 12844.003 - 12896.643: 42.7707% ( 59) 00:13:23.572 12896.643 - 12949.282: 43.5960% ( 75) 00:13:23.572 12949.282 - 13001.921: 44.2232% ( 57) 00:13:23.572 13001.921 - 13054.561: 45.0044% ( 71) 00:13:23.572 13054.561 - 13107.200: 45.7306% ( 66) 00:13:23.572 13107.200 - 13159.839: 46.4349% ( 64) 00:13:23.572 13159.839 - 13212.479: 46.9190% ( 44) 00:13:23.572 13212.479 - 13265.118: 47.4252% ( 46) 00:13:23.572 13265.118 - 13317.757: 47.9533% ( 48) 00:13:23.572 13317.757 - 13370.397: 48.5255% ( 52) 00:13:23.572 13370.397 - 13423.036: 49.0757% ( 50) 00:13:23.572 13423.036 - 13475.676: 49.5929% ( 47) 00:13:23.572 13475.676 - 13580.954: 50.5942% ( 91) 00:13:23.572 13580.954 - 13686.233: 51.6395% ( 95) 00:13:23.573 13686.233 - 13791.512: 52.6629% ( 93) 00:13:23.573 13791.512 - 13896.790: 53.9283% ( 115) 00:13:23.573 13896.790 - 14002.069: 55.0616% ( 103) 00:13:23.573 14002.069 - 14107.348: 55.9969% ( 85) 00:13:23.573 14107.348 - 14212.627: 56.8992% ( 82) 00:13:23.573 14212.627 - 14317.905: 57.7795% ( 80) 00:13:23.573 14317.905 - 14423.184: 58.4837% ( 64) 00:13:23.573 14423.184 - 14528.463: 59.2540% ( 70) 00:13:23.573 14528.463 - 14633.741: 60.0792% ( 75) 00:13:23.573 14633.741 - 14739.020: 60.9705% ( 81) 00:13:23.573 14739.020 - 14844.299: 62.2469% ( 116) 00:13:23.573 14844.299 - 14949.578: 63.4353% ( 108) 00:13:23.573 14949.578 - 15054.856: 64.4586% ( 93) 00:13:23.573 15054.856 - 15160.135: 65.5370% ( 98) 00:13:23.573 15160.135 - 15265.414: 66.5163% ( 89) 00:13:23.573 15265.414 - 15370.692: 67.5286% ( 92) 00:13:23.573 15370.692 - 15475.971: 68.3429% ( 74) 00:13:23.573 15475.971 - 15581.250: 69.4102% ( 97) 00:13:23.573 15581.250 - 15686.529: 70.3125% ( 82) 00:13:23.573 15686.529 - 15791.807: 71.1708% ( 78) 00:13:23.573 15791.807 - 15897.086: 72.0841% ( 83) 00:13:23.573 15897.086 - 16002.365: 73.0854% ( 91) 00:13:23.573 16002.365 - 16107.643: 73.9877% ( 82) 00:13:23.573 16107.643 - 16212.922: 75.0440% ( 96) 00:13:23.573 16212.922 - 16318.201: 76.2104% ( 106) 00:13:23.573 16318.201 - 16423.480: 77.3878% ( 107) 00:13:23.573 16423.480 - 16528.758: 78.3781% ( 90) 00:13:23.573 16528.758 - 16634.037: 79.3574% ( 89) 00:13:23.573 16634.037 - 16739.316: 80.5788% ( 111) 00:13:23.573 16739.316 - 16844.594: 81.4481% ( 79) 00:13:23.573 16844.594 - 16949.873: 82.3283% ( 80) 00:13:23.573 16949.873 - 17055.152: 83.0986% ( 70) 00:13:23.573 17055.152 - 17160.431: 83.8908% ( 72) 00:13:23.573 17160.431 - 17265.709: 84.7161% ( 75) 00:13:23.573 17265.709 - 17370.988: 85.4203% ( 64) 00:13:23.573 17370.988 - 17476.267: 86.0475% ( 57) 00:13:23.573 17476.267 - 17581.545: 86.7077% ( 60) 00:13:23.573 17581.545 - 17686.824: 87.3460% ( 58) 00:13:23.573 17686.824 - 17792.103: 88.2482% ( 82) 00:13:23.573 17792.103 - 17897.382: 89.2165% ( 88) 00:13:23.573 17897.382 - 18002.660: 89.8107% ( 54) 00:13:23.573 18002.660 - 18107.939: 90.2399% ( 39) 00:13:23.573 18107.939 - 18213.218: 90.7020% ( 42) 00:13:23.573 18213.218 - 18318.496: 91.1532% ( 41) 00:13:23.573 18318.496 - 18423.775: 91.7033% ( 50) 00:13:23.573 18423.775 - 18529.054: 92.2095% ( 46) 00:13:23.573 18529.054 - 18634.333: 92.6937% ( 44) 00:13:23.573 18634.333 - 18739.611: 93.1228% ( 39) 00:13:23.573 18739.611 - 18844.890: 93.4749% ( 32) 00:13:23.573 18844.890 - 18950.169: 93.7610% ( 26) 00:13:23.573 18950.169 - 19055.447: 94.0911% ( 30) 00:13:23.573 19055.447 - 19160.726: 94.5092% ( 38) 00:13:23.573 19160.726 - 19266.005: 94.8944% ( 35) 00:13:23.573 19266.005 - 
19371.284: 95.2575% ( 33) 00:13:23.573 19371.284 - 19476.562: 95.5986% ( 31) 00:13:23.573 19476.562 - 19581.841: 96.0057% ( 37) 00:13:23.573 19581.841 - 19687.120: 96.3798% ( 34) 00:13:23.573 19687.120 - 19792.398: 96.6439% ( 24) 00:13:23.573 19792.398 - 19897.677: 96.8640% ( 20) 00:13:23.573 19897.677 - 20002.956: 97.0511% ( 17) 00:13:23.573 20002.956 - 20108.235: 97.2271% ( 16) 00:13:23.573 20108.235 - 20213.513: 97.4472% ( 20) 00:13:23.573 20213.513 - 20318.792: 97.6562% ( 19) 00:13:23.573 20318.792 - 20424.071: 97.7333% ( 7) 00:13:23.573 20424.071 - 20529.349: 97.8543% ( 11) 00:13:23.573 20529.349 - 20634.628: 97.9864% ( 12) 00:13:23.573 20634.628 - 20739.907: 98.0964% ( 10) 00:13:23.573 20739.907 - 20845.186: 98.1734% ( 7) 00:13:23.573 20845.186 - 20950.464: 98.2614% ( 8) 00:13:23.573 20950.464 - 21055.743: 98.3275% ( 6) 00:13:23.573 21055.743 - 21161.022: 98.4045% ( 7) 00:13:23.573 21161.022 - 21266.300: 98.4705% ( 6) 00:13:23.573 21266.300 - 21371.579: 98.5255% ( 5) 00:13:23.573 21371.579 - 21476.858: 98.5585% ( 3) 00:13:23.573 21476.858 - 21582.137: 98.5915% ( 3) 00:13:23.573 31583.614 - 31794.172: 98.6246% ( 3) 00:13:23.573 31794.172 - 32004.729: 98.7016% ( 7) 00:13:23.573 32004.729 - 32215.287: 98.7676% ( 6) 00:13:23.573 32215.287 - 32425.844: 98.8336% ( 6) 00:13:23.573 32425.844 - 32636.402: 98.9107% ( 7) 00:13:23.573 32636.402 - 32846.959: 98.9767% ( 6) 00:13:23.573 32846.959 - 33057.516: 99.0427% ( 6) 00:13:23.573 33057.516 - 33268.074: 99.1197% ( 7) 00:13:23.573 33268.074 - 33478.631: 99.1857% ( 6) 00:13:23.573 33478.631 - 33689.189: 99.2518% ( 6) 00:13:23.573 33689.189 - 33899.746: 99.2958% ( 4) 00:13:23.573 43164.273 - 43374.831: 99.3178% ( 2) 00:13:23.573 43374.831 - 43585.388: 99.3618% ( 4) 00:13:23.573 43585.388 - 43795.945: 99.4168% ( 5) 00:13:23.573 43795.945 - 44006.503: 99.4608% ( 4) 00:13:23.573 44006.503 - 44217.060: 99.5048% ( 4) 00:13:23.573 44217.060 - 44427.618: 99.5709% ( 6) 00:13:23.573 44427.618 - 44638.175: 99.6259% ( 5) 00:13:23.573 44638.175 - 44848.733: 99.6589% ( 3) 00:13:23.573 44848.733 - 45059.290: 99.7029% ( 4) 00:13:23.573 45059.290 - 45269.847: 99.7579% ( 5) 00:13:23.573 45269.847 - 45480.405: 99.8129% ( 5) 00:13:23.573 45480.405 - 45690.962: 99.8460% ( 3) 00:13:23.573 45690.962 - 45901.520: 99.9010% ( 5) 00:13:23.573 45901.520 - 46112.077: 99.9560% ( 5) 00:13:23.573 46112.077 - 46322.635: 100.0000% ( 4) 00:13:23.573 00:13:23.573 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:23.573 ============================================================================== 00:13:23.573 Range in us Cumulative IO count 00:13:23.573 9369.806 - 9422.445: 0.0220% ( 2) 00:13:23.573 9422.445 - 9475.084: 0.0330% ( 1) 00:13:23.573 9475.084 - 9527.724: 0.0550% ( 2) 00:13:23.573 9527.724 - 9580.363: 0.1651% ( 10) 00:13:23.573 9580.363 - 9633.002: 0.3081% ( 13) 00:13:23.573 9633.002 - 9685.642: 0.5612% ( 23) 00:13:23.573 9685.642 - 9738.281: 1.0343% ( 43) 00:13:23.573 9738.281 - 9790.920: 1.4855% ( 41) 00:13:23.573 9790.920 - 9843.560: 2.0246% ( 49) 00:13:23.573 9843.560 - 9896.199: 2.8499% ( 75) 00:13:23.573 9896.199 - 9948.839: 3.5211% ( 61) 00:13:23.573 9948.839 - 10001.478: 4.2143% ( 63) 00:13:23.573 10001.478 - 10054.117: 4.9956% ( 71) 00:13:23.573 10054.117 - 10106.757: 6.0189% ( 93) 00:13:23.573 10106.757 - 10159.396: 7.1853% ( 106) 00:13:23.573 10159.396 - 10212.035: 8.5497% ( 124) 00:13:23.573 10212.035 - 10264.675: 9.9252% ( 125) 00:13:23.573 10264.675 - 10317.314: 11.2896% ( 124) 00:13:23.573 10317.314 - 10369.953: 12.5110% ( 111) 
00:13:23.573 10369.953 - 10422.593: 13.3253% ( 74) 00:13:23.573 10422.593 - 10475.232: 14.0955% ( 70) 00:13:23.573 10475.232 - 10527.871: 14.8548% ( 69) 00:13:23.573 10527.871 - 10580.511: 15.5590% ( 64) 00:13:23.573 10580.511 - 10633.150: 16.1862% ( 57) 00:13:23.573 10633.150 - 10685.790: 16.6813% ( 45) 00:13:23.573 10685.790 - 10738.429: 17.2645% ( 53) 00:13:23.573 10738.429 - 10791.068: 17.9577% ( 63) 00:13:23.573 10791.068 - 10843.708: 18.9481% ( 90) 00:13:23.573 10843.708 - 10896.347: 19.9494% ( 91) 00:13:23.573 10896.347 - 10948.986: 20.9507% ( 91) 00:13:23.573 10948.986 - 11001.626: 21.8640% ( 83) 00:13:23.573 11001.626 - 11054.265: 22.8873% ( 93) 00:13:23.573 11054.265 - 11106.904: 23.9107% ( 93) 00:13:23.573 11106.904 - 11159.544: 24.8349% ( 84) 00:13:23.573 11159.544 - 11212.183: 25.5722% ( 67) 00:13:23.573 11212.183 - 11264.822: 26.3974% ( 75) 00:13:23.573 11264.822 - 11317.462: 26.9696% ( 52) 00:13:23.573 11317.462 - 11370.101: 27.3878% ( 38) 00:13:23.573 11370.101 - 11422.741: 27.9820% ( 54) 00:13:23.573 11422.741 - 11475.380: 28.4111% ( 39) 00:13:23.573 11475.380 - 11528.019: 29.0163% ( 55) 00:13:23.573 11528.019 - 11580.659: 29.4894% ( 43) 00:13:23.573 11580.659 - 11633.298: 29.9186% ( 39) 00:13:23.573 11633.298 - 11685.937: 30.3257% ( 37) 00:13:23.573 11685.937 - 11738.577: 30.7328% ( 37) 00:13:23.573 11738.577 - 11791.216: 31.0960% ( 33) 00:13:23.573 11791.216 - 11843.855: 31.6571% ( 51) 00:13:23.573 11843.855 - 11896.495: 32.2293% ( 52) 00:13:23.573 11896.495 - 11949.134: 32.6364% ( 37) 00:13:23.573 11949.134 - 12001.773: 32.9115% ( 25) 00:13:23.573 12001.773 - 12054.413: 33.2526% ( 31) 00:13:23.573 12054.413 - 12107.052: 33.6928% ( 40) 00:13:23.573 12107.052 - 12159.692: 33.9789% ( 26) 00:13:23.573 12159.692 - 12212.331: 34.2430% ( 24) 00:13:23.573 12212.331 - 12264.970: 34.5401% ( 27) 00:13:23.573 12264.970 - 12317.610: 35.1673% ( 57) 00:13:23.573 12317.610 - 12370.249: 35.7394% ( 52) 00:13:23.573 12370.249 - 12422.888: 36.3116% ( 52) 00:13:23.573 12422.888 - 12475.528: 36.6967% ( 35) 00:13:23.573 12475.528 - 12528.167: 37.0929% ( 36) 00:13:23.573 12528.167 - 12580.806: 37.4450% ( 32) 00:13:23.573 12580.806 - 12633.446: 37.8411% ( 36) 00:13:23.573 12633.446 - 12686.085: 38.4133% ( 52) 00:13:23.573 12686.085 - 12738.724: 39.0515% ( 58) 00:13:23.573 12738.724 - 12791.364: 39.7557% ( 64) 00:13:23.573 12791.364 - 12844.003: 40.4489% ( 63) 00:13:23.573 12844.003 - 12896.643: 41.0321% ( 53) 00:13:23.573 12896.643 - 12949.282: 41.6703% ( 58) 00:13:23.573 12949.282 - 13001.921: 42.4516% ( 71) 00:13:23.573 13001.921 - 13054.561: 43.2438% ( 72) 00:13:23.573 13054.561 - 13107.200: 44.2562% ( 92) 00:13:23.573 13107.200 - 13159.839: 44.9384% ( 62) 00:13:23.573 13159.839 - 13212.479: 45.5986% ( 60) 00:13:23.573 13212.479 - 13265.118: 46.2368% ( 58) 00:13:23.573 13265.118 - 13317.757: 46.9630% ( 66) 00:13:23.573 13317.757 - 13370.397: 47.7223% ( 69) 00:13:23.573 13370.397 - 13423.036: 48.4815% ( 69) 00:13:23.573 13423.036 - 13475.676: 49.1747% ( 63) 00:13:23.573 13475.676 - 13580.954: 50.6932% ( 138) 00:13:23.573 13580.954 - 13686.233: 51.8596% ( 106) 00:13:23.573 13686.233 - 13791.512: 52.8829% ( 93) 00:13:23.573 13791.512 - 13896.790: 53.7412% ( 78) 00:13:23.573 13896.790 - 14002.069: 54.6875% ( 86) 00:13:23.573 14002.069 - 14107.348: 55.6668% ( 89) 00:13:23.573 14107.348 - 14212.627: 56.6351% ( 88) 00:13:23.573 14212.627 - 14317.905: 57.7355% ( 100) 00:13:23.573 14317.905 - 14423.184: 58.7148% ( 89) 00:13:23.573 14423.184 - 14528.463: 59.4740% ( 69) 00:13:23.573 14528.463 - 
14633.741: 60.0792% ( 55) 00:13:23.573 14633.741 - 14739.020: 60.7614% ( 62) 00:13:23.573 14739.020 - 14844.299: 61.7408% ( 89) 00:13:23.573 14844.299 - 14949.578: 62.7421% ( 91) 00:13:23.573 14949.578 - 15054.856: 63.6554% ( 83) 00:13:23.573 15054.856 - 15160.135: 64.4586% ( 73) 00:13:23.573 15160.135 - 15265.414: 65.7350% ( 116) 00:13:23.573 15265.414 - 15370.692: 67.4516% ( 156) 00:13:23.573 15370.692 - 15475.971: 68.9591% ( 137) 00:13:23.573 15475.971 - 15581.250: 69.8834% ( 84) 00:13:23.573 15581.250 - 15686.529: 70.9727% ( 99) 00:13:23.573 15686.529 - 15791.807: 72.0180% ( 95) 00:13:23.573 15791.807 - 15897.086: 72.9754% ( 87) 00:13:23.573 15897.086 - 16002.365: 74.1417% ( 106) 00:13:23.573 16002.365 - 16107.643: 75.1981% ( 96) 00:13:23.573 16107.643 - 16212.922: 76.1004% ( 82) 00:13:23.573 16212.922 - 16318.201: 76.8486% ( 68) 00:13:23.573 16318.201 - 16423.480: 77.6629% ( 74) 00:13:23.573 16423.480 - 16528.758: 78.4661% ( 73) 00:13:23.573 16528.758 - 16634.037: 79.2804% ( 74) 00:13:23.573 16634.037 - 16739.316: 80.0396% ( 69) 00:13:23.573 16739.316 - 16844.594: 80.7438% ( 64) 00:13:23.573 16844.594 - 16949.873: 81.7011% ( 87) 00:13:23.573 16949.873 - 17055.152: 82.8015% ( 100) 00:13:23.573 17055.152 - 17160.431: 83.6158% ( 74) 00:13:23.573 17160.431 - 17265.709: 84.4410% ( 75) 00:13:23.573 17265.709 - 17370.988: 85.3433% ( 82) 00:13:23.573 17370.988 - 17476.267: 86.1136% ( 70) 00:13:23.573 17476.267 - 17581.545: 86.8618% ( 68) 00:13:23.573 17581.545 - 17686.824: 87.6100% ( 68) 00:13:23.573 17686.824 - 17792.103: 88.5233% ( 83) 00:13:23.573 17792.103 - 17897.382: 89.1285% ( 55) 00:13:23.573 17897.382 - 18002.660: 89.7117% ( 53) 00:13:23.573 18002.660 - 18107.939: 90.3609% ( 59) 00:13:23.573 18107.939 - 18213.218: 90.9661% ( 55) 00:13:23.573 18213.218 - 18318.496: 91.6373% ( 61) 00:13:23.573 18318.496 - 18423.775: 92.2975% ( 60) 00:13:23.573 18423.775 - 18529.054: 93.1558% ( 78) 00:13:23.573 18529.054 - 18634.333: 93.7500% ( 54) 00:13:23.573 18634.333 - 18739.611: 94.3882% ( 58) 00:13:23.573 18739.611 - 18844.890: 94.8834% ( 45) 00:13:23.573 18844.890 - 18950.169: 95.2025% ( 29) 00:13:23.573 18950.169 - 19055.447: 95.5106% ( 28) 00:13:23.573 19055.447 - 19160.726: 95.7857% ( 25) 00:13:23.573 19160.726 - 19266.005: 96.0497% ( 24) 00:13:23.573 19266.005 - 19371.284: 96.2478% ( 18) 00:13:23.573 19371.284 - 19476.562: 96.3908% ( 13) 00:13:23.573 19476.562 - 19581.841: 96.5999% ( 19) 00:13:23.573 19581.841 - 19687.120: 96.8860% ( 26) 00:13:23.573 19687.120 - 19792.398: 97.0180% ( 12) 00:13:23.573 19792.398 - 19897.677: 97.1281% ( 10) 00:13:23.573 19897.677 - 20002.956: 97.2381% ( 10) 00:13:23.573 20002.956 - 20108.235: 97.3592% ( 11) 00:13:23.573 20108.235 - 20213.513: 97.4912% ( 12) 00:13:23.573 20213.513 - 20318.792: 97.5682% ( 7) 00:13:23.573 20318.792 - 20424.071: 97.6232% ( 5) 00:13:23.573 20424.071 - 20529.349: 97.7333% ( 10) 00:13:23.573 20529.349 - 20634.628: 97.8433% ( 10) 00:13:23.573 20634.628 - 20739.907: 97.9093% ( 6) 00:13:23.573 20739.907 - 20845.186: 97.9974% ( 8) 00:13:23.573 20845.186 - 20950.464: 98.0744% ( 7) 00:13:23.573 20950.464 - 21055.743: 98.1624% ( 8) 00:13:23.573 21055.743 - 21161.022: 98.2394% ( 7) 00:13:23.573 21161.022 - 21266.300: 98.2835% ( 4) 00:13:23.573 21266.300 - 21371.579: 98.3275% ( 4) 00:13:23.573 21371.579 - 21476.858: 98.3715% ( 4) 00:13:23.573 21476.858 - 21582.137: 98.4155% ( 4) 00:13:23.573 21582.137 - 21687.415: 98.4595% ( 4) 00:13:23.573 21687.415 - 21792.694: 98.5035% ( 4) 00:13:23.573 21792.694 - 21897.973: 98.5475% ( 4) 00:13:23.573 
21897.973 - 22003.251: 98.5915% ( 4) 00:13:23.573 31583.614 - 31794.172: 98.6246% ( 3) 00:13:23.573 31794.172 - 32004.729: 98.7016% ( 7) 00:13:23.573 32004.729 - 32215.287: 98.7786% ( 7) 00:13:23.573 32215.287 - 32425.844: 98.8556% ( 7) 00:13:23.573 32425.844 - 32636.402: 98.9327% ( 7) 00:13:23.573 32636.402 - 32846.959: 99.0097% ( 7) 00:13:23.573 32846.959 - 33057.516: 99.0757% ( 6) 00:13:23.573 33057.516 - 33268.074: 99.1637% ( 8) 00:13:23.573 33268.074 - 33478.631: 99.2298% ( 6) 00:13:23.573 33478.631 - 33689.189: 99.2958% ( 6) 00:13:23.573 42111.486 - 42322.043: 99.3178% ( 2) 00:13:23.573 42322.043 - 42532.601: 99.3618% ( 4) 00:13:23.573 42532.601 - 42743.158: 99.4278% ( 6) 00:13:23.573 42743.158 - 42953.716: 99.4828% ( 5) 00:13:23.573 42953.716 - 43164.273: 99.5379% ( 5) 00:13:23.573 43164.273 - 43374.831: 99.5929% ( 5) 00:13:23.573 43374.831 - 43585.388: 99.6589% ( 6) 00:13:23.573 43585.388 - 43795.945: 99.7139% ( 5) 00:13:23.573 43795.945 - 44006.503: 99.7689% ( 5) 00:13:23.573 44006.503 - 44217.060: 99.8239% ( 5) 00:13:23.573 44217.060 - 44427.618: 99.8680% ( 4) 00:13:23.573 44427.618 - 44638.175: 99.9230% ( 5) 00:13:23.573 44638.175 - 44848.733: 99.9890% ( 6) 00:13:23.573 44848.733 - 45059.290: 100.0000% ( 1) 00:13:23.573 00:13:23.573 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:23.573 ============================================================================== 00:13:23.573 Range in us Cumulative IO count 00:13:23.573 9422.445 - 9475.084: 0.0220% ( 2) 00:13:23.573 9475.084 - 9527.724: 0.0990% ( 7) 00:13:23.573 9527.724 - 9580.363: 0.1761% ( 7) 00:13:23.573 9580.363 - 9633.002: 0.2971% ( 11) 00:13:23.573 9633.002 - 9685.642: 0.5502% ( 23) 00:13:23.573 9685.642 - 9738.281: 0.7923% ( 22) 00:13:23.573 9738.281 - 9790.920: 1.1334% ( 31) 00:13:23.573 9790.920 - 9843.560: 1.5845% ( 41) 00:13:23.573 9843.560 - 9896.199: 2.1017% ( 47) 00:13:23.573 9896.199 - 9948.839: 2.8389% ( 67) 00:13:23.573 9948.839 - 10001.478: 3.7302% ( 81) 00:13:23.573 10001.478 - 10054.117: 4.5775% ( 77) 00:13:23.573 10054.117 - 10106.757: 5.6888% ( 101) 00:13:23.573 10106.757 - 10159.396: 6.8882% ( 109) 00:13:23.573 10159.396 - 10212.035: 8.2416% ( 123) 00:13:23.573 10212.035 - 10264.675: 9.1439% ( 82) 00:13:23.573 10264.675 - 10317.314: 10.0462% ( 82) 00:13:23.573 10317.314 - 10369.953: 11.1686% ( 102) 00:13:23.573 10369.953 - 10422.593: 11.9938% ( 75) 00:13:23.573 10422.593 - 10475.232: 12.9071% ( 83) 00:13:23.573 10475.232 - 10527.871: 14.0405% ( 103) 00:13:23.573 10527.871 - 10580.511: 14.8217% ( 71) 00:13:23.573 10580.511 - 10633.150: 15.6030% ( 71) 00:13:23.573 10633.150 - 10685.790: 16.5713% ( 88) 00:13:23.573 10685.790 - 10738.429: 17.3195% ( 68) 00:13:23.573 10738.429 - 10791.068: 18.2218% ( 82) 00:13:23.573 10791.068 - 10843.708: 18.9921% ( 70) 00:13:23.573 10843.708 - 10896.347: 19.7403% ( 68) 00:13:23.573 10896.347 - 10948.986: 20.8737% ( 103) 00:13:23.573 10948.986 - 11001.626: 21.6769% ( 73) 00:13:23.573 11001.626 - 11054.265: 22.6893% ( 92) 00:13:23.573 11054.265 - 11106.904: 23.3275% ( 58) 00:13:23.573 11106.904 - 11159.544: 24.0537% ( 66) 00:13:23.573 11159.544 - 11212.183: 24.9560% ( 82) 00:13:23.573 11212.183 - 11264.822: 25.5282% ( 52) 00:13:23.573 11264.822 - 11317.462: 26.2214% ( 63) 00:13:23.573 11317.462 - 11370.101: 26.8926% ( 61) 00:13:23.573 11370.101 - 11422.741: 27.5528% ( 60) 00:13:23.573 11422.741 - 11475.380: 28.1690% ( 56) 00:13:23.573 11475.380 - 11528.019: 28.9833% ( 74) 00:13:23.573 11528.019 - 11580.659: 29.4674% ( 44) 00:13:23.573 11580.659 - 
11633.298: 29.8636% ( 36) 00:13:23.573 11633.298 - 11685.937: 30.2377% ( 34) 00:13:23.573 11685.937 - 11738.577: 30.5238% ( 26) 00:13:23.573 11738.577 - 11791.216: 30.8539% ( 30) 00:13:23.573 11791.216 - 11843.855: 31.2280% ( 34) 00:13:23.573 11843.855 - 11896.495: 31.5471% ( 29) 00:13:23.573 11896.495 - 11949.134: 31.9432% ( 36) 00:13:23.573 11949.134 - 12001.773: 32.3614% ( 38) 00:13:23.573 12001.773 - 12054.413: 32.7575% ( 36) 00:13:23.573 12054.413 - 12107.052: 33.1426% ( 35) 00:13:23.573 12107.052 - 12159.692: 33.5387% ( 36) 00:13:23.573 12159.692 - 12212.331: 33.9679% ( 39) 00:13:23.573 12212.331 - 12264.970: 34.3750% ( 37) 00:13:23.573 12264.970 - 12317.610: 34.6171% ( 22) 00:13:23.573 12317.610 - 12370.249: 34.9362% ( 29) 00:13:23.573 12370.249 - 12422.888: 35.4313% ( 45) 00:13:23.573 12422.888 - 12475.528: 36.0475% ( 56) 00:13:23.573 12475.528 - 12528.167: 36.5867% ( 49) 00:13:23.573 12528.167 - 12580.806: 37.0048% ( 38) 00:13:23.573 12580.806 - 12633.446: 37.5550% ( 50) 00:13:23.573 12633.446 - 12686.085: 37.9621% ( 37) 00:13:23.573 12686.085 - 12738.724: 38.4683% ( 46) 00:13:23.573 12738.724 - 12791.364: 39.2055% ( 67) 00:13:23.573 12791.364 - 12844.003: 39.9648% ( 69) 00:13:23.573 12844.003 - 12896.643: 40.5480% ( 53) 00:13:23.573 12896.643 - 12949.282: 41.3732% ( 75) 00:13:23.573 12949.282 - 13001.921: 42.1325% ( 69) 00:13:23.573 13001.921 - 13054.561: 42.8697% ( 67) 00:13:23.573 13054.561 - 13107.200: 43.7170% ( 77) 00:13:23.573 13107.200 - 13159.839: 44.3772% ( 60) 00:13:23.573 13159.839 - 13212.479: 45.0374% ( 60) 00:13:23.573 13212.479 - 13265.118: 45.6976% ( 60) 00:13:23.573 13265.118 - 13317.757: 46.4129% ( 65) 00:13:23.573 13317.757 - 13370.397: 47.0401% ( 57) 00:13:23.574 13370.397 - 13423.036: 47.4582% ( 38) 00:13:23.574 13423.036 - 13475.676: 47.8433% ( 35) 00:13:23.574 13475.676 - 13580.954: 48.9327% ( 99) 00:13:23.574 13580.954 - 13686.233: 49.8460% ( 83) 00:13:23.574 13686.233 - 13791.512: 50.7812% ( 85) 00:13:23.574 13791.512 - 13896.790: 51.9916% ( 110) 00:13:23.574 13896.790 - 14002.069: 52.9820% ( 90) 00:13:23.574 14002.069 - 14107.348: 54.0603% ( 98) 00:13:23.574 14107.348 - 14212.627: 55.0286% ( 88) 00:13:23.574 14212.627 - 14317.905: 56.0629% ( 94) 00:13:23.574 14317.905 - 14423.184: 57.0973% ( 94) 00:13:23.574 14423.184 - 14528.463: 57.9225% ( 75) 00:13:23.574 14528.463 - 14633.741: 59.2099% ( 117) 00:13:23.574 14633.741 - 14739.020: 60.3763% ( 106) 00:13:23.574 14739.020 - 14844.299: 61.8508% ( 134) 00:13:23.574 14844.299 - 14949.578: 63.3913% ( 140) 00:13:23.574 14949.578 - 15054.856: 65.0198% ( 148) 00:13:23.574 15054.856 - 15160.135: 66.4503% ( 130) 00:13:23.574 15160.135 - 15265.414: 67.8367% ( 126) 00:13:23.574 15265.414 - 15370.692: 69.0251% ( 108) 00:13:23.574 15370.692 - 15475.971: 70.4335% ( 128) 00:13:23.574 15475.971 - 15581.250: 71.6769% ( 113) 00:13:23.574 15581.250 - 15686.529: 72.7443% ( 97) 00:13:23.574 15686.529 - 15791.807: 73.4925% ( 68) 00:13:23.574 15791.807 - 15897.086: 74.2628% ( 70) 00:13:23.574 15897.086 - 16002.365: 75.0770% ( 74) 00:13:23.574 16002.365 - 16107.643: 76.1554% ( 98) 00:13:23.574 16107.643 - 16212.922: 77.1237% ( 88) 00:13:23.574 16212.922 - 16318.201: 78.1360% ( 92) 00:13:23.574 16318.201 - 16423.480: 79.2694% ( 103) 00:13:23.574 16423.480 - 16528.758: 80.0506% ( 71) 00:13:23.574 16528.758 - 16634.037: 80.6778% ( 57) 00:13:23.574 16634.037 - 16739.316: 81.4481% ( 70) 00:13:23.574 16739.316 - 16844.594: 82.3283% ( 80) 00:13:23.574 16844.594 - 16949.873: 83.4287% ( 100) 00:13:23.574 16949.873 - 17055.152: 84.3640% ( 
85) 00:13:23.574 17055.152 - 17160.431: 85.1673% ( 73) 00:13:23.574 17160.431 - 17265.709: 85.9485% ( 71) 00:13:23.574 17265.709 - 17370.988: 86.7188% ( 70) 00:13:23.574 17370.988 - 17476.267: 87.3790% ( 60) 00:13:23.574 17476.267 - 17581.545: 88.1052% ( 66) 00:13:23.574 17581.545 - 17686.824: 89.0405% ( 85) 00:13:23.574 17686.824 - 17792.103: 89.7117% ( 61) 00:13:23.574 17792.103 - 17897.382: 90.2289% ( 47) 00:13:23.574 17897.382 - 18002.660: 90.6910% ( 42) 00:13:23.574 18002.660 - 18107.939: 91.3292% ( 58) 00:13:23.574 18107.939 - 18213.218: 91.9674% ( 58) 00:13:23.574 18213.218 - 18318.496: 92.5726% ( 55) 00:13:23.574 18318.496 - 18423.775: 93.0458% ( 43) 00:13:23.574 18423.775 - 18529.054: 93.4309% ( 35) 00:13:23.574 18529.054 - 18634.333: 93.7280% ( 27) 00:13:23.574 18634.333 - 18739.611: 93.9811% ( 23) 00:13:23.574 18739.611 - 18844.890: 94.2121% ( 21) 00:13:23.574 18844.890 - 18950.169: 94.5643% ( 32) 00:13:23.574 18950.169 - 19055.447: 94.9934% ( 39) 00:13:23.574 19055.447 - 19160.726: 95.4996% ( 46) 00:13:23.574 19160.726 - 19266.005: 96.0167% ( 47) 00:13:23.574 19266.005 - 19371.284: 96.3908% ( 34) 00:13:23.574 19371.284 - 19476.562: 96.6219% ( 21) 00:13:23.574 19476.562 - 19581.841: 96.8530% ( 21) 00:13:23.574 19581.841 - 19687.120: 97.0180% ( 15) 00:13:23.574 19687.120 - 19792.398: 97.1281% ( 10) 00:13:23.574 19792.398 - 19897.677: 97.1941% ( 6) 00:13:23.574 19897.677 - 20002.956: 97.2711% ( 7) 00:13:23.574 20002.956 - 20108.235: 97.3371% ( 6) 00:13:23.574 20108.235 - 20213.513: 97.3812% ( 4) 00:13:23.574 20213.513 - 20318.792: 97.4252% ( 4) 00:13:23.574 20318.792 - 20424.071: 97.4802% ( 5) 00:13:23.574 20424.071 - 20529.349: 97.5682% ( 8) 00:13:23.574 20529.349 - 20634.628: 97.6452% ( 7) 00:13:23.574 20634.628 - 20739.907: 97.7223% ( 7) 00:13:23.574 20739.907 - 20845.186: 97.8103% ( 8) 00:13:23.574 20845.186 - 20950.464: 97.8873% ( 7) 00:13:23.574 20950.464 - 21055.743: 97.9643% ( 7) 00:13:23.574 21055.743 - 21161.022: 98.0634% ( 9) 00:13:23.574 21161.022 - 21266.300: 98.1404% ( 7) 00:13:23.574 21266.300 - 21371.579: 98.2174% ( 7) 00:13:23.574 21371.579 - 21476.858: 98.3055% ( 8) 00:13:23.574 21476.858 - 21582.137: 98.3715% ( 6) 00:13:23.574 21582.137 - 21687.415: 98.4705% ( 9) 00:13:23.574 21687.415 - 21792.694: 98.5475% ( 7) 00:13:23.574 21792.694 - 21897.973: 98.5915% ( 4) 00:13:23.574 30951.942 - 31162.500: 98.6356% ( 4) 00:13:23.574 31162.500 - 31373.057: 98.7016% ( 6) 00:13:23.574 31373.057 - 31583.614: 98.7456% ( 4) 00:13:23.574 31583.614 - 31794.172: 98.8006% ( 5) 00:13:23.574 31794.172 - 32004.729: 98.8666% ( 6) 00:13:23.574 32004.729 - 32215.287: 98.9107% ( 4) 00:13:23.574 32215.287 - 32425.844: 98.9767% ( 6) 00:13:23.574 32425.844 - 32636.402: 99.0317% ( 5) 00:13:23.574 32636.402 - 32846.959: 99.0867% ( 5) 00:13:23.574 32846.959 - 33057.516: 99.1417% ( 5) 00:13:23.574 33057.516 - 33268.074: 99.1967% ( 5) 00:13:23.574 33268.074 - 33478.631: 99.2518% ( 5) 00:13:23.574 33478.631 - 33689.189: 99.2958% ( 4) 00:13:23.574 41269.256 - 41479.814: 99.3398% ( 4) 00:13:23.574 41479.814 - 41690.371: 99.3948% ( 5) 00:13:23.574 41690.371 - 41900.929: 99.4388% ( 4) 00:13:23.574 41900.929 - 42111.486: 99.4828% ( 4) 00:13:23.574 42111.486 - 42322.043: 99.5489% ( 6) 00:13:23.574 42322.043 - 42532.601: 99.6039% ( 5) 00:13:23.574 42532.601 - 42743.158: 99.6589% ( 5) 00:13:23.574 42743.158 - 42953.716: 99.7249% ( 6) 00:13:23.574 42953.716 - 43164.273: 99.7799% ( 5) 00:13:23.574 43164.273 - 43374.831: 99.8349% ( 5) 00:13:23.574 43374.831 - 43585.388: 99.8900% ( 5) 00:13:23.574 43585.388 - 
43795.945: 99.9450% ( 5) 00:13:23.574 43795.945 - 44006.503: 100.0000% ( 5) 00:13:23.574 00:13:23.574 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:23.574 ============================================================================== 00:13:23.574 Range in us Cumulative IO count 00:13:23.574 9264.527 - 9317.166: 0.0770% ( 7) 00:13:23.574 9317.166 - 9369.806: 0.1430% ( 6) 00:13:23.574 9369.806 - 9422.445: 0.2201% ( 7) 00:13:23.574 9422.445 - 9475.084: 0.3631% ( 13) 00:13:23.574 9475.084 - 9527.724: 0.5172% ( 14) 00:13:23.574 9527.724 - 9580.363: 0.5832% ( 6) 00:13:23.574 9580.363 - 9633.002: 0.6382% ( 5) 00:13:23.574 9633.002 - 9685.642: 0.7372% ( 9) 00:13:23.574 9685.642 - 9738.281: 0.9463% ( 19) 00:13:23.574 9738.281 - 9790.920: 1.2214% ( 25) 00:13:23.574 9790.920 - 9843.560: 1.6175% ( 36) 00:13:23.574 9843.560 - 9896.199: 2.2117% ( 54) 00:13:23.574 9896.199 - 9948.839: 2.8389% ( 57) 00:13:23.574 9948.839 - 10001.478: 3.5651% ( 66) 00:13:23.574 10001.478 - 10054.117: 4.7865% ( 111) 00:13:23.574 10054.117 - 10106.757: 5.6558% ( 79) 00:13:23.574 10106.757 - 10159.396: 6.4591% ( 73) 00:13:23.574 10159.396 - 10212.035: 7.2953% ( 76) 00:13:23.574 10212.035 - 10264.675: 8.2306% ( 85) 00:13:23.574 10264.675 - 10317.314: 9.5070% ( 116) 00:13:23.574 10317.314 - 10369.953: 11.0585% ( 141) 00:13:23.574 10369.953 - 10422.593: 12.6651% ( 146) 00:13:23.574 10422.593 - 10475.232: 13.7654% ( 100) 00:13:23.574 10475.232 - 10527.871: 14.7227% ( 87) 00:13:23.574 10527.871 - 10580.511: 15.4820% ( 69) 00:13:23.574 10580.511 - 10633.150: 16.2852% ( 73) 00:13:23.574 10633.150 - 10685.790: 16.9894% ( 64) 00:13:23.574 10685.790 - 10738.429: 17.7267% ( 67) 00:13:23.574 10738.429 - 10791.068: 18.5849% ( 78) 00:13:23.574 10791.068 - 10843.708: 19.2892% ( 64) 00:13:23.574 10843.708 - 10896.347: 20.1585% ( 79) 00:13:23.574 10896.347 - 10948.986: 20.7086% ( 50) 00:13:23.574 10948.986 - 11001.626: 21.2258% ( 47) 00:13:23.574 11001.626 - 11054.265: 21.9300% ( 64) 00:13:23.574 11054.265 - 11106.904: 22.4032% ( 43) 00:13:23.574 11106.904 - 11159.544: 22.8873% ( 44) 00:13:23.574 11159.544 - 11212.183: 23.6686% ( 71) 00:13:23.574 11212.183 - 11264.822: 24.2958% ( 57) 00:13:23.574 11264.822 - 11317.462: 25.1210% ( 75) 00:13:23.574 11317.462 - 11370.101: 26.2104% ( 99) 00:13:23.574 11370.101 - 11422.741: 26.8376% ( 57) 00:13:23.574 11422.741 - 11475.380: 27.2777% ( 40) 00:13:23.574 11475.380 - 11528.019: 27.6959% ( 38) 00:13:23.574 11528.019 - 11580.659: 28.0700% ( 34) 00:13:23.574 11580.659 - 11633.298: 28.4221% ( 32) 00:13:23.574 11633.298 - 11685.937: 28.8622% ( 40) 00:13:23.574 11685.937 - 11738.577: 29.4234% ( 51) 00:13:23.574 11738.577 - 11791.216: 30.0836% ( 60) 00:13:23.574 11791.216 - 11843.855: 30.6668% ( 53) 00:13:23.574 11843.855 - 11896.495: 31.1510% ( 44) 00:13:23.574 11896.495 - 11949.134: 31.6021% ( 41) 00:13:23.574 11949.134 - 12001.773: 32.0533% ( 41) 00:13:23.574 12001.773 - 12054.413: 32.5704% ( 47) 00:13:23.574 12054.413 - 12107.052: 32.9886% ( 38) 00:13:23.574 12107.052 - 12159.692: 33.4507% ( 42) 00:13:23.574 12159.692 - 12212.331: 33.8578% ( 37) 00:13:23.574 12212.331 - 12264.970: 34.4190% ( 51) 00:13:23.574 12264.970 - 12317.610: 35.0572% ( 58) 00:13:23.574 12317.610 - 12370.249: 35.6514% ( 54) 00:13:23.574 12370.249 - 12422.888: 36.2456% ( 54) 00:13:23.574 12422.888 - 12475.528: 36.8068% ( 51) 00:13:23.574 12475.528 - 12528.167: 37.3790% ( 52) 00:13:23.574 12528.167 - 12580.806: 38.0392% ( 60) 00:13:23.574 12580.806 - 12633.446: 38.5783% ( 49) 00:13:23.574 12633.446 - 12686.085: 
39.2055% ( 57) 00:13:23.574 12686.085 - 12738.724: 39.8217% ( 56) 00:13:23.574 12738.724 - 12791.364: 40.4379% ( 56) 00:13:23.574 12791.364 - 12844.003: 41.0761% ( 58) 00:13:23.574 12844.003 - 12896.643: 41.6043% ( 48) 00:13:23.574 12896.643 - 12949.282: 42.3526% ( 68) 00:13:23.574 12949.282 - 13001.921: 43.0018% ( 59) 00:13:23.574 13001.921 - 13054.561: 43.7170% ( 65) 00:13:23.574 13054.561 - 13107.200: 44.4432% ( 66) 00:13:23.574 13107.200 - 13159.839: 44.9714% ( 48) 00:13:23.574 13159.839 - 13212.479: 45.5326% ( 51) 00:13:23.574 13212.479 - 13265.118: 46.0057% ( 43) 00:13:23.574 13265.118 - 13317.757: 46.4569% ( 41) 00:13:23.574 13317.757 - 13370.397: 46.9520% ( 45) 00:13:23.574 13370.397 - 13423.036: 47.7113% ( 69) 00:13:23.574 13423.036 - 13475.676: 48.2945% ( 53) 00:13:23.574 13475.676 - 13580.954: 49.5048% ( 110) 00:13:23.574 13580.954 - 13686.233: 50.4621% ( 87) 00:13:23.574 13686.233 - 13791.512: 51.2104% ( 68) 00:13:23.574 13791.512 - 13896.790: 51.9476% ( 67) 00:13:23.574 13896.790 - 14002.069: 53.1910% ( 113) 00:13:23.574 14002.069 - 14107.348: 54.0603% ( 79) 00:13:23.574 14107.348 - 14212.627: 54.9846% ( 84) 00:13:23.574 14212.627 - 14317.905: 55.9309% ( 86) 00:13:23.574 14317.905 - 14423.184: 56.7892% ( 78) 00:13:23.574 14423.184 - 14528.463: 57.7025% ( 83) 00:13:23.574 14528.463 - 14633.741: 58.9569% ( 114) 00:13:23.574 14633.741 - 14739.020: 60.2883% ( 121) 00:13:23.574 14739.020 - 14844.299: 61.6087% ( 120) 00:13:23.574 14844.299 - 14949.578: 62.8961% ( 117) 00:13:23.574 14949.578 - 15054.856: 64.4696% ( 143) 00:13:23.574 15054.856 - 15160.135: 66.0761% ( 146) 00:13:23.574 15160.135 - 15265.414: 67.4076% ( 121) 00:13:23.574 15265.414 - 15370.692: 68.7610% ( 123) 00:13:23.574 15370.692 - 15475.971: 70.0814% ( 120) 00:13:23.574 15475.971 - 15581.250: 71.3578% ( 116) 00:13:23.574 15581.250 - 15686.529: 72.9754% ( 147) 00:13:23.574 15686.529 - 15791.807: 73.9877% ( 92) 00:13:23.574 15791.807 - 15897.086: 74.7909% ( 73) 00:13:23.574 15897.086 - 16002.365: 75.6492% ( 78) 00:13:23.574 16002.365 - 16107.643: 76.3314% ( 62) 00:13:23.574 16107.643 - 16212.922: 76.9366% ( 55) 00:13:23.574 16212.922 - 16318.201: 77.5418% ( 55) 00:13:23.574 16318.201 - 16423.480: 78.1910% ( 59) 00:13:23.574 16423.480 - 16528.758: 78.9283% ( 67) 00:13:23.574 16528.758 - 16634.037: 80.0286% ( 100) 00:13:23.574 16634.037 - 16739.316: 81.1730% ( 104) 00:13:23.574 16739.316 - 16844.594: 82.0312% ( 78) 00:13:23.574 16844.594 - 16949.873: 83.0436% ( 92) 00:13:23.574 16949.873 - 17055.152: 84.1219% ( 98) 00:13:23.574 17055.152 - 17160.431: 84.9582% ( 76) 00:13:23.574 17160.431 - 17265.709: 85.8385% ( 80) 00:13:23.574 17265.709 - 17370.988: 87.0268% ( 108) 00:13:23.574 17370.988 - 17476.267: 87.8961% ( 79) 00:13:23.574 17476.267 - 17581.545: 88.8314% ( 85) 00:13:23.574 17581.545 - 17686.824: 89.4696% ( 58) 00:13:23.574 17686.824 - 17792.103: 90.0528% ( 53) 00:13:23.574 17792.103 - 17897.382: 90.6910% ( 58) 00:13:23.574 17897.382 - 18002.660: 91.2962% ( 55) 00:13:23.574 18002.660 - 18107.939: 91.6483% ( 32) 00:13:23.574 18107.939 - 18213.218: 91.9344% ( 26) 00:13:23.574 18213.218 - 18318.496: 92.2645% ( 30) 00:13:23.574 18318.496 - 18423.775: 92.7377% ( 43) 00:13:23.574 18423.775 - 18529.054: 93.1998% ( 42) 00:13:23.574 18529.054 - 18634.333: 93.6400% ( 40) 00:13:23.574 18634.333 - 18739.611: 93.9371% ( 27) 00:13:23.574 18739.611 - 18844.890: 94.1791% ( 22) 00:13:23.574 18844.890 - 18950.169: 94.3662% ( 17) 00:13:23.574 18950.169 - 19055.447: 94.5753% ( 19) 00:13:23.574 19055.447 - 19160.726: 94.7733% ( 18) 
00:13:23.574 19160.726 - 19266.005: 95.0814% ( 28) 00:13:23.574 19266.005 - 19371.284: 95.4996% ( 38) 00:13:23.574 19371.284 - 19476.562: 95.8737% ( 34) 00:13:23.574 19476.562 - 19581.841: 96.2368% ( 33) 00:13:23.574 19581.841 - 19687.120: 96.4899% ( 23) 00:13:23.574 19687.120 - 19792.398: 96.6989% ( 19) 00:13:23.574 19792.398 - 19897.677: 96.8640% ( 15) 00:13:23.574 19897.677 - 20002.956: 97.0731% ( 19) 00:13:23.574 20002.956 - 20108.235: 97.3041% ( 21) 00:13:23.574 20108.235 - 20213.513: 97.4692% ( 15) 00:13:23.574 20213.513 - 20318.792: 97.5902% ( 11) 00:13:23.574 20318.792 - 20424.071: 97.7113% ( 11) 00:13:23.574 20424.071 - 20529.349: 97.8433% ( 12) 00:13:23.574 20529.349 - 20634.628: 97.9533% ( 10) 00:13:23.574 20634.628 - 20739.907: 98.0414% ( 8) 00:13:23.574 20739.907 - 20845.186: 98.1184% ( 7) 00:13:23.574 20845.186 - 20950.464: 98.1954% ( 7) 00:13:23.574 20950.464 - 21055.743: 98.2835% ( 8) 00:13:23.574 21055.743 - 21161.022: 98.3715% ( 8) 00:13:23.574 21161.022 - 21266.300: 98.4595% ( 8) 00:13:23.574 21266.300 - 21371.579: 98.5475% ( 8) 00:13:23.574 21371.579 - 21476.858: 98.5915% ( 4) 00:13:23.574 29688.598 - 29899.155: 98.6356% ( 4) 00:13:23.574 29899.155 - 30109.712: 98.7456% ( 10) 00:13:23.574 30109.712 - 30320.270: 98.7676% ( 2) 00:13:23.574 30320.270 - 30530.827: 98.8116% ( 4) 00:13:23.574 30530.827 - 30741.385: 98.8556% ( 4) 00:13:23.574 30951.942 - 31162.500: 98.9327% ( 7) 00:13:23.574 31162.500 - 31373.057: 99.0317% ( 9) 00:13:23.574 31373.057 - 31583.614: 99.1197% ( 8) 00:13:23.574 31583.614 - 31794.172: 99.1527% ( 3) 00:13:23.574 31794.172 - 32004.729: 99.2077% ( 5) 00:13:23.574 32004.729 - 32215.287: 99.2518% ( 4) 00:13:23.574 32215.287 - 32425.844: 99.2958% ( 4) 00:13:23.574 38953.124 - 39163.682: 99.3068% ( 1) 00:13:23.574 39163.682 - 39374.239: 99.3288% ( 2) 00:13:23.574 40005.912 - 40216.469: 99.3838% ( 5) 00:13:23.574 40216.469 - 40427.027: 99.4278% ( 4) 00:13:23.574 40427.027 - 40637.584: 99.4828% ( 5) 00:13:23.574 40637.584 - 40848.141: 99.5379% ( 5) 00:13:23.574 40848.141 - 41058.699: 99.6039% ( 6) 00:13:23.574 41058.699 - 41269.256: 99.6589% ( 5) 00:13:23.574 41269.256 - 41479.814: 99.7249% ( 6) 00:13:23.574 41479.814 - 41690.371: 99.7799% ( 5) 00:13:23.574 41690.371 - 41900.929: 99.8349% ( 5) 00:13:23.574 41900.929 - 42111.486: 99.8900% ( 5) 00:13:23.574 42111.486 - 42322.043: 99.9450% ( 5) 00:13:23.574 42322.043 - 42532.601: 100.0000% ( 5) 00:13:23.574 00:13:23.574 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:23.574 ============================================================================== 00:13:23.574 Range in us Cumulative IO count 00:13:23.574 9159.248 - 9211.888: 0.0110% ( 1) 00:13:23.574 9317.166 - 9369.806: 0.0330% ( 2) 00:13:23.574 9369.806 - 9422.445: 0.0880% ( 5) 00:13:23.574 9422.445 - 9475.084: 0.1430% ( 5) 00:13:23.574 9475.084 - 9527.724: 0.2091% ( 6) 00:13:23.574 9527.724 - 9580.363: 0.3961% ( 17) 00:13:23.574 9580.363 - 9633.002: 0.6162% ( 20) 00:13:23.574 9633.002 - 9685.642: 0.8143% ( 18) 00:13:23.574 9685.642 - 9738.281: 1.1664% ( 32) 00:13:23.574 9738.281 - 9790.920: 1.6065% ( 40) 00:13:23.574 9790.920 - 9843.560: 2.2117% ( 55) 00:13:23.574 9843.560 - 9896.199: 2.9049% ( 63) 00:13:23.574 9896.199 - 9948.839: 3.9173% ( 92) 00:13:23.574 9948.839 - 10001.478: 4.7865% ( 79) 00:13:23.574 10001.478 - 10054.117: 5.6888% ( 82) 00:13:23.574 10054.117 - 10106.757: 6.7342% ( 95) 00:13:23.574 10106.757 - 10159.396: 7.6585% ( 84) 00:13:23.574 10159.396 - 10212.035: 8.5607% ( 82) 00:13:23.574 10212.035 - 10264.675: 9.3530% ( 
72) 00:13:23.574 10264.675 - 10317.314: 10.0792% ( 66) 00:13:23.574 10317.314 - 10369.953: 11.0805% ( 91) 00:13:23.574 10369.953 - 10422.593: 12.0709% ( 90) 00:13:23.574 10422.593 - 10475.232: 12.8961% ( 75) 00:13:23.574 10475.232 - 10527.871: 13.5783% ( 62) 00:13:23.574 10527.871 - 10580.511: 14.4806% ( 82) 00:13:23.574 10580.511 - 10633.150: 15.3389% ( 78) 00:13:23.574 10633.150 - 10685.790: 16.0211% ( 62) 00:13:23.574 10685.790 - 10738.429: 16.9124% ( 81) 00:13:23.574 10738.429 - 10791.068: 17.6166% ( 64) 00:13:23.574 10791.068 - 10843.708: 18.1888% ( 52) 00:13:23.574 10843.708 - 10896.347: 18.8490% ( 60) 00:13:23.574 10896.347 - 10948.986: 19.4762% ( 57) 00:13:23.574 10948.986 - 11001.626: 20.2685% ( 72) 00:13:23.574 11001.626 - 11054.265: 20.8737% ( 55) 00:13:23.574 11054.265 - 11106.904: 21.5889% ( 65) 00:13:23.574 11106.904 - 11159.544: 22.3482% ( 69) 00:13:23.574 11159.544 - 11212.183: 23.2064% ( 78) 00:13:23.574 11212.183 - 11264.822: 24.0427% ( 76) 00:13:23.574 11264.822 - 11317.462: 24.8239% ( 71) 00:13:23.575 11317.462 - 11370.101: 25.5832% ( 69) 00:13:23.575 11370.101 - 11422.741: 26.1884% ( 55) 00:13:23.575 11422.741 - 11475.380: 26.7165% ( 48) 00:13:23.575 11475.380 - 11528.019: 27.1457% ( 39) 00:13:23.575 11528.019 - 11580.659: 27.5748% ( 39) 00:13:23.575 11580.659 - 11633.298: 28.0590% ( 44) 00:13:23.575 11633.298 - 11685.937: 28.5761% ( 47) 00:13:23.575 11685.937 - 11738.577: 29.2364% ( 60) 00:13:23.575 11738.577 - 11791.216: 29.6545% ( 38) 00:13:23.575 11791.216 - 11843.855: 30.0726% ( 38) 00:13:23.575 11843.855 - 11896.495: 30.6888% ( 56) 00:13:23.575 11896.495 - 11949.134: 31.1070% ( 38) 00:13:23.575 11949.134 - 12001.773: 31.4151% ( 28) 00:13:23.575 12001.773 - 12054.413: 31.8442% ( 39) 00:13:23.575 12054.413 - 12107.052: 32.3063% ( 42) 00:13:23.575 12107.052 - 12159.692: 32.7795% ( 43) 00:13:23.575 12159.692 - 12212.331: 33.3627% ( 53) 00:13:23.575 12212.331 - 12264.970: 34.1659% ( 73) 00:13:23.575 12264.970 - 12317.610: 35.0352% ( 79) 00:13:23.575 12317.610 - 12370.249: 35.8275% ( 72) 00:13:23.575 12370.249 - 12422.888: 36.4987% ( 61) 00:13:23.575 12422.888 - 12475.528: 37.2799% ( 71) 00:13:23.575 12475.528 - 12528.167: 38.0392% ( 69) 00:13:23.575 12528.167 - 12580.806: 38.6554% ( 56) 00:13:23.575 12580.806 - 12633.446: 39.1615% ( 46) 00:13:23.575 12633.446 - 12686.085: 39.6787% ( 47) 00:13:23.575 12686.085 - 12738.724: 40.3169% ( 58) 00:13:23.575 12738.724 - 12791.364: 40.9331% ( 56) 00:13:23.575 12791.364 - 12844.003: 41.5383% ( 55) 00:13:23.575 12844.003 - 12896.643: 42.2425% ( 64) 00:13:23.575 12896.643 - 12949.282: 42.8587% ( 56) 00:13:23.575 12949.282 - 13001.921: 43.7500% ( 81) 00:13:23.575 13001.921 - 13054.561: 44.4652% ( 65) 00:13:23.575 13054.561 - 13107.200: 45.1364% ( 61) 00:13:23.575 13107.200 - 13159.839: 45.7306% ( 54) 00:13:23.575 13159.839 - 13212.479: 46.1708% ( 40) 00:13:23.575 13212.479 - 13265.118: 46.7210% ( 50) 00:13:23.575 13265.118 - 13317.757: 47.0841% ( 33) 00:13:23.575 13317.757 - 13370.397: 47.4802% ( 36) 00:13:23.575 13370.397 - 13423.036: 48.0304% ( 50) 00:13:23.575 13423.036 - 13475.676: 48.4925% ( 42) 00:13:23.575 13475.676 - 13580.954: 49.6369% ( 104) 00:13:23.575 13580.954 - 13686.233: 50.6602% ( 93) 00:13:23.575 13686.233 - 13791.512: 51.7716% ( 101) 00:13:23.575 13791.512 - 13896.790: 52.8609% ( 99) 00:13:23.575 13896.790 - 14002.069: 54.0603% ( 109) 00:13:23.575 14002.069 - 14107.348: 55.0396% ( 89) 00:13:23.575 14107.348 - 14212.627: 55.8099% ( 70) 00:13:23.575 14212.627 - 14317.905: 56.7452% ( 85) 00:13:23.575 14317.905 - 
14423.184: 57.3834% ( 58) 00:13:23.575 14423.184 - 14528.463: 58.1316% ( 68) 00:13:23.575 14528.463 - 14633.741: 59.1989% ( 97) 00:13:23.575 14633.741 - 14739.020: 60.4203% ( 111) 00:13:23.575 14739.020 - 14844.299: 61.6527% ( 112) 00:13:23.575 14844.299 - 14949.578: 62.6320% ( 89) 00:13:23.575 14949.578 - 15054.856: 63.5563% ( 84) 00:13:23.575 15054.856 - 15160.135: 65.0748% ( 138) 00:13:23.575 15160.135 - 15265.414: 66.6043% ( 139) 00:13:23.575 15265.414 - 15370.692: 67.7817% ( 107) 00:13:23.575 15370.692 - 15475.971: 68.9811% ( 109) 00:13:23.575 15475.971 - 15581.250: 70.6316% ( 150) 00:13:23.575 15581.250 - 15686.529: 72.1281% ( 136) 00:13:23.575 15686.529 - 15791.807: 73.2614% ( 103) 00:13:23.575 15791.807 - 15897.086: 74.2298% ( 88) 00:13:23.575 15897.086 - 16002.365: 75.3191% ( 99) 00:13:23.575 16002.365 - 16107.643: 76.2654% ( 86) 00:13:23.575 16107.643 - 16212.922: 77.0246% ( 69) 00:13:23.575 16212.922 - 16318.201: 78.0040% ( 89) 00:13:23.575 16318.201 - 16423.480: 78.8512% ( 77) 00:13:23.575 16423.480 - 16528.758: 79.7095% ( 78) 00:13:23.575 16528.758 - 16634.037: 80.4137% ( 64) 00:13:23.575 16634.037 - 16739.316: 81.1730% ( 69) 00:13:23.575 16739.316 - 16844.594: 82.1743% ( 91) 00:13:23.575 16844.594 - 16949.873: 82.9776% ( 73) 00:13:23.575 16949.873 - 17055.152: 83.6818% ( 64) 00:13:23.575 17055.152 - 17160.431: 84.5070% ( 75) 00:13:23.575 17160.431 - 17265.709: 85.4313% ( 84) 00:13:23.575 17265.709 - 17370.988: 86.3556% ( 84) 00:13:23.575 17370.988 - 17476.267: 87.6871% ( 121) 00:13:23.575 17476.267 - 17581.545: 88.4793% ( 72) 00:13:23.575 17581.545 - 17686.824: 89.1285% ( 59) 00:13:23.575 17686.824 - 17792.103: 89.9318% ( 73) 00:13:23.575 17792.103 - 17897.382: 90.6030% ( 61) 00:13:23.575 17897.382 - 18002.660: 91.0651% ( 42) 00:13:23.575 18002.660 - 18107.939: 91.6703% ( 55) 00:13:23.575 18107.939 - 18213.218: 92.1105% ( 40) 00:13:23.575 18213.218 - 18318.496: 92.3636% ( 23) 00:13:23.575 18318.496 - 18423.775: 92.6056% ( 22) 00:13:23.575 18423.775 - 18529.054: 92.9247% ( 29) 00:13:23.575 18529.054 - 18634.333: 93.2108% ( 26) 00:13:23.575 18634.333 - 18739.611: 93.6730% ( 42) 00:13:23.575 18739.611 - 18844.890: 94.0581% ( 35) 00:13:23.575 18844.890 - 18950.169: 94.3992% ( 31) 00:13:23.575 18950.169 - 19055.447: 94.7073% ( 28) 00:13:23.575 19055.447 - 19160.726: 94.9824% ( 25) 00:13:23.575 19160.726 - 19266.005: 95.3235% ( 31) 00:13:23.575 19266.005 - 19371.284: 95.6866% ( 33) 00:13:23.575 19371.284 - 19476.562: 96.0277% ( 31) 00:13:23.575 19476.562 - 19581.841: 96.3578% ( 30) 00:13:23.575 19581.841 - 19687.120: 96.6659% ( 28) 00:13:23.575 19687.120 - 19792.398: 97.0841% ( 38) 00:13:23.575 19792.398 - 19897.677: 97.3592% ( 25) 00:13:23.575 19897.677 - 20002.956: 97.5682% ( 19) 00:13:23.575 20002.956 - 20108.235: 97.7553% ( 17) 00:13:23.575 20108.235 - 20213.513: 97.9203% ( 15) 00:13:23.575 20213.513 - 20318.792: 98.0524% ( 12) 00:13:23.575 20318.792 - 20424.071: 98.1294% ( 7) 00:13:23.575 20424.071 - 20529.349: 98.2064% ( 7) 00:13:23.575 20529.349 - 20634.628: 98.2724% ( 6) 00:13:23.575 20634.628 - 20739.907: 98.3275% ( 5) 00:13:23.575 20739.907 - 20845.186: 98.4045% ( 7) 00:13:23.575 20845.186 - 20950.464: 98.4925% ( 8) 00:13:23.575 20950.464 - 21055.743: 98.5695% ( 7) 00:13:23.575 21055.743 - 21161.022: 98.5915% ( 2) 00:13:23.575 29056.925 - 29267.483: 98.7126% ( 11) 00:13:23.575 29267.483 - 29478.040: 98.8226% ( 10) 00:13:23.575 29478.040 - 29688.598: 98.9767% ( 14) 00:13:23.575 29688.598 - 29899.155: 99.0757% ( 9) 00:13:23.575 29899.155 - 30109.712: 99.1197% ( 4) 
00:13:23.575 30109.712 - 30320.270: 99.1527% ( 3) 00:13:23.575 30320.270 - 30530.827: 99.1857% ( 3) 00:13:23.575 30530.827 - 30741.385: 99.2188% ( 3) 00:13:23.575 30741.385 - 30951.942: 99.2738% ( 5) 00:13:23.575 30951.942 - 31162.500: 99.2958% ( 2) 00:13:23.575 37268.665 - 37479.222: 99.3508% ( 5) 00:13:23.575 38532.010 - 38742.567: 99.3728% ( 2) 00:13:23.575 38742.567 - 38953.124: 99.4058% ( 3) 00:13:23.575 38953.124 - 39163.682: 99.4608% ( 5) 00:13:23.575 39163.682 - 39374.239: 99.5048% ( 4) 00:13:23.575 39374.239 - 39584.797: 99.5599% ( 5) 00:13:23.575 39584.797 - 39795.354: 99.6039% ( 4) 00:13:23.575 39795.354 - 40005.912: 99.6589% ( 5) 00:13:23.575 40005.912 - 40216.469: 99.7249% ( 6) 00:13:23.575 40216.469 - 40427.027: 99.7799% ( 5) 00:13:23.575 40427.027 - 40637.584: 99.8239% ( 4) 00:13:23.575 40637.584 - 40848.141: 99.8900% ( 6) 00:13:23.575 40848.141 - 41058.699: 99.9450% ( 5) 00:13:23.575 41058.699 - 41269.256: 100.0000% ( 5) 00:13:23.575 00:13:23.575 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:23.575 ============================================================================== 00:13:23.575 Range in us Cumulative IO count 00:13:23.575 9211.888 - 9264.527: 0.0110% ( 1) 00:13:23.575 9317.166 - 9369.806: 0.0440% ( 3) 00:13:23.575 9369.806 - 9422.445: 0.1430% ( 9) 00:13:23.575 9422.445 - 9475.084: 0.2421% ( 9) 00:13:23.575 9475.084 - 9527.724: 0.3961% ( 14) 00:13:23.575 9527.724 - 9580.363: 0.5722% ( 16) 00:13:23.575 9580.363 - 9633.002: 0.6712% ( 9) 00:13:23.575 9633.002 - 9685.642: 0.9463% ( 25) 00:13:23.575 9685.642 - 9738.281: 1.2544% ( 28) 00:13:23.575 9738.281 - 9790.920: 1.6615% ( 37) 00:13:23.575 9790.920 - 9843.560: 2.1567% ( 45) 00:13:23.575 9843.560 - 9896.199: 3.1140% ( 87) 00:13:23.575 9896.199 - 9948.839: 4.1593% ( 95) 00:13:23.575 9948.839 - 10001.478: 4.9406% ( 71) 00:13:23.575 10001.478 - 10054.117: 6.1730% ( 112) 00:13:23.575 10054.117 - 10106.757: 7.1193% ( 86) 00:13:23.575 10106.757 - 10159.396: 8.0436% ( 84) 00:13:23.575 10159.396 - 10212.035: 9.0559% ( 92) 00:13:23.575 10212.035 - 10264.675: 10.0902% ( 94) 00:13:23.575 10264.675 - 10317.314: 11.1796% ( 99) 00:13:23.575 10317.314 - 10369.953: 12.2689% ( 99) 00:13:23.575 10369.953 - 10422.593: 13.0612% ( 72) 00:13:23.575 10422.593 - 10475.232: 13.7104% ( 59) 00:13:23.575 10475.232 - 10527.871: 14.3926% ( 62) 00:13:23.575 10527.871 - 10580.511: 15.2399% ( 77) 00:13:23.575 10580.511 - 10633.150: 16.2522% ( 92) 00:13:23.575 10633.150 - 10685.790: 16.9784% ( 66) 00:13:23.575 10685.790 - 10738.429: 17.6717% ( 63) 00:13:23.575 10738.429 - 10791.068: 18.4089% ( 67) 00:13:23.575 10791.068 - 10843.708: 18.9040% ( 45) 00:13:23.575 10843.708 - 10896.347: 19.2672% ( 33) 00:13:23.575 10896.347 - 10948.986: 19.8283% ( 51) 00:13:23.575 10948.986 - 11001.626: 20.5326% ( 64) 00:13:23.575 11001.626 - 11054.265: 21.0827% ( 50) 00:13:23.575 11054.265 - 11106.904: 21.6549% ( 52) 00:13:23.575 11106.904 - 11159.544: 22.4692% ( 74) 00:13:23.575 11159.544 - 11212.183: 23.4155% ( 86) 00:13:23.575 11212.183 - 11264.822: 24.1747% ( 69) 00:13:23.575 11264.822 - 11317.462: 24.9340% ( 69) 00:13:23.575 11317.462 - 11370.101: 25.6492% ( 65) 00:13:23.575 11370.101 - 11422.741: 26.3204% ( 61) 00:13:23.575 11422.741 - 11475.380: 26.9476% ( 57) 00:13:23.575 11475.380 - 11528.019: 27.2997% ( 32) 00:13:23.575 11528.019 - 11580.659: 27.8499% ( 50) 00:13:23.575 11580.659 - 11633.298: 28.3671% ( 47) 00:13:23.575 11633.298 - 11685.937: 29.0273% ( 60) 00:13:23.575 11685.937 - 11738.577: 29.5775% ( 50) 00:13:23.575 11738.577 - 
11791.216: 30.0506% ( 43) 00:13:23.575 11791.216 - 11843.855: 30.5348% ( 44) 00:13:23.575 11843.855 - 11896.495: 31.0299% ( 45) 00:13:23.575 11896.495 - 11949.134: 31.6241% ( 54) 00:13:23.575 11949.134 - 12001.773: 32.2953% ( 61) 00:13:23.575 12001.773 - 12054.413: 32.8455% ( 50) 00:13:23.575 12054.413 - 12107.052: 33.2967% ( 41) 00:13:23.575 12107.052 - 12159.692: 33.8028% ( 46) 00:13:23.575 12159.692 - 12212.331: 34.2430% ( 40) 00:13:23.575 12212.331 - 12264.970: 34.8371% ( 54) 00:13:23.575 12264.970 - 12317.610: 35.4864% ( 59) 00:13:23.575 12317.610 - 12370.249: 36.2236% ( 67) 00:13:23.575 12370.249 - 12422.888: 36.8178% ( 54) 00:13:23.575 12422.888 - 12475.528: 37.4450% ( 57) 00:13:23.575 12475.528 - 12528.167: 38.1382% ( 63) 00:13:23.575 12528.167 - 12580.806: 38.7434% ( 55) 00:13:23.575 12580.806 - 12633.446: 39.3266% ( 53) 00:13:23.575 12633.446 - 12686.085: 39.8548% ( 48) 00:13:23.575 12686.085 - 12738.724: 40.5260% ( 61) 00:13:23.575 12738.724 - 12791.364: 41.5933% ( 97) 00:13:23.575 12791.364 - 12844.003: 42.2645% ( 61) 00:13:23.575 12844.003 - 12896.643: 42.9137% ( 59) 00:13:23.575 12896.643 - 12949.282: 43.4419% ( 48) 00:13:23.575 12949.282 - 13001.921: 43.9481% ( 46) 00:13:23.575 13001.921 - 13054.561: 44.6523% ( 64) 00:13:23.575 13054.561 - 13107.200: 45.1805% ( 48) 00:13:23.575 13107.200 - 13159.839: 45.7857% ( 55) 00:13:23.575 13159.839 - 13212.479: 46.2918% ( 46) 00:13:23.575 13212.479 - 13265.118: 46.7650% ( 43) 00:13:23.575 13265.118 - 13317.757: 47.2161% ( 41) 00:13:23.575 13317.757 - 13370.397: 47.7443% ( 48) 00:13:23.575 13370.397 - 13423.036: 48.4705% ( 66) 00:13:23.575 13423.036 - 13475.676: 49.0867% ( 56) 00:13:23.575 13475.676 - 13580.954: 50.1430% ( 96) 00:13:23.575 13580.954 - 13686.233: 51.1444% ( 91) 00:13:23.575 13686.233 - 13791.512: 52.1017% ( 87) 00:13:23.575 13791.512 - 13896.790: 53.1140% ( 92) 00:13:23.575 13896.790 - 14002.069: 54.2364% ( 102) 00:13:23.575 14002.069 - 14107.348: 55.4137% ( 107) 00:13:23.575 14107.348 - 14212.627: 56.2720% ( 78) 00:13:23.575 14212.627 - 14317.905: 56.9762% ( 64) 00:13:23.575 14317.905 - 14423.184: 58.0546% ( 98) 00:13:23.575 14423.184 - 14528.463: 59.3090% ( 114) 00:13:23.575 14528.463 - 14633.741: 60.1562% ( 77) 00:13:23.575 14633.741 - 14739.020: 60.8495% ( 63) 00:13:23.575 14739.020 - 14844.299: 61.5977% ( 68) 00:13:23.575 14844.299 - 14949.578: 62.5110% ( 83) 00:13:23.575 14949.578 - 15054.856: 63.2812% ( 70) 00:13:23.575 15054.856 - 15160.135: 63.9855% ( 64) 00:13:23.575 15160.135 - 15265.414: 64.8217% ( 76) 00:13:23.575 15265.414 - 15370.692: 66.0211% ( 109) 00:13:23.575 15370.692 - 15475.971: 67.3526% ( 121) 00:13:23.575 15475.971 - 15581.250: 68.6840% ( 121) 00:13:23.575 15581.250 - 15686.529: 70.2685% ( 144) 00:13:23.575 15686.529 - 15791.807: 71.6879% ( 129) 00:13:23.575 15791.807 - 15897.086: 72.8653% ( 107) 00:13:23.575 15897.086 - 16002.365: 73.9437% ( 98) 00:13:23.575 16002.365 - 16107.643: 75.0880% ( 104) 00:13:23.575 16107.643 - 16212.922: 76.2214% ( 103) 00:13:23.575 16212.922 - 16318.201: 77.3988% ( 107) 00:13:23.575 16318.201 - 16423.480: 78.5211% ( 102) 00:13:23.575 16423.480 - 16528.758: 79.4234% ( 82) 00:13:23.575 16528.758 - 16634.037: 80.4908% ( 97) 00:13:23.575 16634.037 - 16739.316: 81.5471% ( 96) 00:13:23.575 16739.316 - 16844.594: 82.8565% ( 119) 00:13:23.575 16844.594 - 16949.873: 83.6158% ( 69) 00:13:23.575 16949.873 - 17055.152: 84.2980% ( 62) 00:13:23.575 17055.152 - 17160.431: 84.9692% ( 61) 00:13:23.575 17160.431 - 17265.709: 85.5964% ( 57) 00:13:23.575 17265.709 - 17370.988: 86.1466% ( 
50) 00:13:23.575 17370.988 - 17476.267: 86.9828% ( 76) 00:13:23.575 17476.267 - 17581.545: 87.5440% ( 51) 00:13:23.575 17581.545 - 17686.824: 88.1382% ( 54) 00:13:23.575 17686.824 - 17792.103: 89.0295% ( 81) 00:13:23.575 17792.103 - 17897.382: 89.6787% ( 59) 00:13:23.575 17897.382 - 18002.660: 90.5480% ( 79) 00:13:23.575 18002.660 - 18107.939: 91.0982% ( 50) 00:13:23.575 18107.939 - 18213.218: 91.6263% ( 48) 00:13:23.575 18213.218 - 18318.496: 92.3526% ( 66) 00:13:23.575 18318.496 - 18423.775: 92.7487% ( 36) 00:13:23.575 18423.775 - 18529.054: 93.0898% ( 31) 00:13:23.575 18529.054 - 18634.333: 93.5079% ( 38) 00:13:23.575 18634.333 - 18739.611: 93.8820% ( 34) 00:13:23.575 18739.611 - 18844.890: 94.2232% ( 31) 00:13:23.575 18844.890 - 18950.169: 94.5092% ( 26) 00:13:23.575 18950.169 - 19055.447: 94.7733% ( 24) 00:13:23.575 19055.447 - 19160.726: 95.0484% ( 25) 00:13:23.575 19160.726 - 19266.005: 95.3235% ( 25) 00:13:23.575 19266.005 - 19371.284: 95.6096% ( 26) 00:13:23.575 19371.284 - 19476.562: 95.8627% ( 23) 00:13:23.575 19476.562 - 19581.841: 96.4239% ( 51) 00:13:23.575 19581.841 - 19687.120: 96.6219% ( 18) 00:13:23.575 19687.120 - 19792.398: 96.8530% ( 21) 00:13:23.575 19792.398 - 19897.677: 97.0731% ( 20) 00:13:23.575 19897.677 - 20002.956: 97.3041% ( 21) 00:13:23.575 20002.956 - 20108.235: 97.7113% ( 37) 00:13:23.575 20108.235 - 20213.513: 97.9754% ( 24) 00:13:23.575 20213.513 - 20318.792: 98.0854% ( 10) 00:13:23.575 20318.792 - 20424.071: 98.2284% ( 13) 00:13:23.575 20424.071 - 20529.349: 98.3495% ( 11) 00:13:23.575 20529.349 - 20634.628: 98.4155% ( 6) 00:13:23.575 20634.628 - 20739.907: 98.4595% ( 4) 00:13:23.575 20739.907 - 20845.186: 98.5035% ( 4) 00:13:23.575 20845.186 - 20950.464: 98.5585% ( 5) 00:13:23.575 20950.464 - 21055.743: 98.5915% ( 3) 00:13:23.575 27583.023 - 27793.581: 98.6246% ( 3) 00:13:23.575 27793.581 - 28004.138: 98.7566% ( 12) 00:13:23.575 28004.138 - 28214.696: 98.8996% ( 13) 00:13:23.575 28214.696 - 28425.253: 99.0427% ( 13) 00:13:23.575 28425.253 - 28635.810: 99.0977% ( 5) 00:13:23.575 28635.810 - 28846.368: 99.1417% ( 4) 00:13:23.575 28846.368 - 29056.925: 99.1747% ( 3) 00:13:23.575 29056.925 - 29267.483: 99.2077% ( 3) 00:13:23.575 29267.483 - 29478.040: 99.2408% ( 3) 00:13:23.575 29478.040 - 29688.598: 99.2848% ( 4) 00:13:23.575 29688.598 - 29899.155: 99.2958% ( 1) 00:13:23.575 36005.320 - 36215.878: 99.3288% ( 3) 00:13:23.575 37268.665 - 37479.222: 99.3728% ( 4) 00:13:23.575 37479.222 - 37689.780: 99.4168% ( 4) 00:13:23.575 37689.780 - 37900.337: 99.4718% ( 5) 00:13:23.575 37900.337 - 38110.895: 99.5268% ( 5) 00:13:23.575 38110.895 - 38321.452: 99.5929% ( 6) 00:13:23.575 38321.452 - 38532.010: 99.6479% ( 5) 00:13:23.575 38532.010 - 38742.567: 99.7139% ( 6) 00:13:23.575 38742.567 - 38953.124: 99.7799% ( 6) 00:13:23.575 38953.124 - 39163.682: 99.8460% ( 6) 00:13:23.575 39163.682 - 39374.239: 99.9120% ( 6) 00:13:23.575 39374.239 - 39584.797: 99.9780% ( 6) 00:13:23.575 39584.797 - 39795.354: 100.0000% ( 2) 00:13:23.575 00:13:23.575 11:24:05 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:13:23.575 00:13:23.575 real 0m2.720s 00:13:23.575 user 0m2.278s 00:13:23.575 sys 0m0.326s 00:13:23.575 11:24:05 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.575 11:24:05 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:13:23.575 ************************************ 00:13:23.575 END TEST nvme_perf 00:13:23.575 ************************************ 00:13:23.575 11:24:05 nvme -- nvme/nvme.sh@87 -- # run_test 
nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:23.575 11:24:05 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:23.575 11:24:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.575 11:24:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.575 ************************************ 00:13:23.575 START TEST nvme_hello_world 00:13:23.575 ************************************ 00:13:23.575 11:24:05 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:23.834 Initializing NVMe Controllers 00:13:23.834 Attached to 0000:00:10.0 00:13:23.834 Namespace ID: 1 size: 6GB 00:13:23.834 Attached to 0000:00:11.0 00:13:23.834 Namespace ID: 1 size: 5GB 00:13:23.834 Attached to 0000:00:13.0 00:13:23.834 Namespace ID: 1 size: 1GB 00:13:23.834 Attached to 0000:00:12.0 00:13:23.834 Namespace ID: 1 size: 4GB 00:13:23.834 Namespace ID: 2 size: 4GB 00:13:23.834 Namespace ID: 3 size: 4GB 00:13:23.834 Initialization complete. 00:13:23.834 INFO: using host memory buffer for IO 00:13:23.834 Hello world! 00:13:23.834 INFO: using host memory buffer for IO 00:13:23.834 Hello world! 00:13:23.834 INFO: using host memory buffer for IO 00:13:23.834 Hello world! 00:13:23.834 INFO: using host memory buffer for IO 00:13:23.834 Hello world! 00:13:23.834 INFO: using host memory buffer for IO 00:13:23.834 Hello world! 00:13:23.834 INFO: using host memory buffer for IO 00:13:23.834 Hello world! 00:13:23.834 00:13:23.834 real 0m0.291s 00:13:23.834 user 0m0.093s 00:13:23.834 sys 0m0.155s 00:13:23.834 11:24:05 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.834 ************************************ 00:13:23.834 11:24:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:23.834 END TEST nvme_hello_world 00:13:23.834 ************************************ 00:13:24.094 11:24:05 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:24.094 11:24:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:24.094 11:24:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.094 11:24:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.094 ************************************ 00:13:24.094 START TEST nvme_sgl 00:13:24.094 ************************************ 00:13:24.094 11:24:05 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:24.379 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:13:24.379 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:13:24.379 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:13:24.379 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:13:24.379 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:13:24.379 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:13:24.379 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:13:24.379 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:13:24.379 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:13:24.379 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:13:24.379 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:13:24.379 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:13:24.379 0000:00:13.0: 
build_io_request_1 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:13:24.379 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:13:24.379 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:13:24.379 NVMe Readv/Writev Request test 00:13:24.379 Attached to 0000:00:10.0 00:13:24.379 Attached to 0000:00:11.0 00:13:24.379 Attached to 0000:00:13.0 00:13:24.379 Attached to 0000:00:12.0 00:13:24.379 0000:00:10.0: build_io_request_2 test passed 00:13:24.379 0000:00:10.0: build_io_request_4 test passed 00:13:24.379 0000:00:10.0: build_io_request_5 test passed 00:13:24.379 0000:00:10.0: build_io_request_6 test passed 00:13:24.380 0000:00:10.0: build_io_request_7 test passed 00:13:24.380 0000:00:10.0: build_io_request_10 test passed 00:13:24.380 0000:00:11.0: build_io_request_2 test passed 00:13:24.380 0000:00:11.0: build_io_request_4 test passed 00:13:24.380 0000:00:11.0: build_io_request_5 test passed 00:13:24.380 0000:00:11.0: build_io_request_6 test passed 00:13:24.380 0000:00:11.0: build_io_request_7 test passed 00:13:24.380 0000:00:11.0: build_io_request_10 test passed 00:13:24.380 Cleaning up... 
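Every START/END banner pair and real/user/sys triple in this log, including the nvme_sgl banners that follow, is produced by the harness's run_test wrapper (the `-- common/autotest_common.sh@1101/@1107/@1125` markers in the trace point at it). A minimal sketch of its shape, reconstructed from the visible output rather than copied from the source:

  # Sketch of run_test as reconstructed from this log's banners and
  # timing lines; the real helper in common/autotest_common.sh also
  # handles the xtrace toggling and argument checks seen in the trace.
  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # emits the real/user/sys lines after each test body
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }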
00:13:24.380 ************************************ 00:13:24.380 END TEST nvme_sgl 00:13:24.380 ************************************ 00:13:24.380 00:13:24.380 real 0m0.366s 00:13:24.380 user 0m0.165s 00:13:24.380 sys 0m0.155s 00:13:24.380 11:24:05 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.380 11:24:05 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:13:24.380 11:24:06 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:24.380 11:24:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:24.380 11:24:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.380 11:24:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.380 ************************************ 00:13:24.380 START TEST nvme_e2edp 00:13:24.380 ************************************ 00:13:24.380 11:24:06 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:24.658 NVMe Write/Read with End-to-End data protection test 00:13:24.658 Attached to 0000:00:10.0 00:13:24.658 Attached to 0000:00:11.0 00:13:24.658 Attached to 0000:00:13.0 00:13:24.658 Attached to 0000:00:12.0 00:13:24.658 Cleaning up... 00:13:24.658 00:13:24.658 real 0m0.284s 00:13:24.658 user 0m0.090s 00:13:24.658 sys 0m0.147s 00:13:24.658 ************************************ 00:13:24.658 END TEST nvme_e2edp 00:13:24.658 ************************************ 00:13:24.658 11:24:06 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.658 11:24:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:13:24.923 11:24:06 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:24.923 11:24:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:24.923 11:24:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.923 11:24:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.923 ************************************ 00:13:24.923 START TEST nvme_reserve 00:13:24.923 ************************************ 00:13:24.923 11:24:06 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:25.190 ===================================================== 00:13:25.190 NVMe Controller at PCI bus 0, device 16, function 0 00:13:25.190 ===================================================== 00:13:25.190 Reservations: Not Supported 00:13:25.190 ===================================================== 00:13:25.190 NVMe Controller at PCI bus 0, device 17, function 0 00:13:25.190 ===================================================== 00:13:25.190 Reservations: Not Supported 00:13:25.190 ===================================================== 00:13:25.190 NVMe Controller at PCI bus 0, device 19, function 0 00:13:25.190 ===================================================== 00:13:25.190 Reservations: Not Supported 00:13:25.190 ===================================================== 00:13:25.190 NVMe Controller at PCI bus 0, device 18, function 0 00:13:25.190 ===================================================== 00:13:25.190 Reservations: Not Supported 00:13:25.190 Reservation test passed 00:13:25.190 ************************************ 00:13:25.190 END TEST nvme_reserve 00:13:25.190 ************************************ 00:13:25.190 00:13:25.190 real 0m0.287s 00:13:25.190 user 0m0.113s 00:13:25.190 sys 0m0.134s 00:13:25.190 11:24:06 
nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.190 11:24:06 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:13:25.190 11:24:06 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:25.190 11:24:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:25.190 11:24:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.190 11:24:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.190 ************************************ 00:13:25.190 START TEST nvme_err_injection 00:13:25.190 ************************************ 00:13:25.190 11:24:06 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:25.450 NVMe Error Injection test 00:13:25.450 Attached to 0000:00:10.0 00:13:25.450 Attached to 0000:00:11.0 00:13:25.450 Attached to 0000:00:13.0 00:13:25.450 Attached to 0000:00:12.0 00:13:25.450 0000:00:11.0: get features failed as expected 00:13:25.450 0000:00:13.0: get features failed as expected 00:13:25.450 0000:00:12.0: get features failed as expected 00:13:25.450 0000:00:10.0: get features failed as expected 00:13:25.450 0000:00:11.0: get features successfully as expected 00:13:25.450 0000:00:13.0: get features successfully as expected 00:13:25.450 0000:00:12.0: get features successfully as expected 00:13:25.450 0000:00:10.0: get features successfully as expected 00:13:25.450 0000:00:10.0: read failed as expected 00:13:25.450 0000:00:11.0: read failed as expected 00:13:25.450 0000:00:13.0: read failed as expected 00:13:25.450 0000:00:12.0: read failed as expected 00:13:25.450 0000:00:10.0: read successfully as expected 00:13:25.450 0000:00:11.0: read successfully as expected 00:13:25.450 0000:00:13.0: read successfully as expected 00:13:25.450 0000:00:12.0: read successfully as expected 00:13:25.450 Cleaning up... 00:13:25.451 ************************************ 00:13:25.451 END TEST nvme_err_injection 00:13:25.451 ************************************ 00:13:25.451 00:13:25.451 real 0m0.291s 00:13:25.451 user 0m0.113s 00:13:25.451 sys 0m0.134s 00:13:25.451 11:24:07 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.451 11:24:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:13:25.451 11:24:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:25.451 11:24:07 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:13:25.451 11:24:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.451 11:24:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.451 ************************************ 00:13:25.451 START TEST nvme_overhead 00:13:25.451 ************************************ 00:13:25.451 11:24:07 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:26.825 Initializing NVMe Controllers 00:13:26.825 Attached to 0000:00:10.0 00:13:26.825 Attached to 0000:00:11.0 00:13:26.825 Attached to 0000:00:13.0 00:13:26.825 Attached to 0000:00:12.0 00:13:26.825 Initialization complete. Launching workers. 
00:13:26.825 submit (in ns) avg, min, max = 13422.2, 11139.0, 99396.0 00:13:26.825 complete (in ns) avg, min, max = 8187.1, 7666.7, 103919.7 00:13:26.825 00:13:26.825 Submit histogram 00:13:26.825 ================ 00:13:26.825 Range in us Cumulative Count 00:13:26.825 11.104 - 11.155: 0.0132% ( 1) 00:13:26.825 11.361 - 11.412: 0.0263% ( 1) 00:13:26.825 11.669 - 11.720: 0.0395% ( 1) 00:13:26.825 11.720 - 11.772: 0.0526% ( 1) 00:13:26.825 11.875 - 11.926: 0.0658% ( 1) 00:13:26.825 12.029 - 12.080: 0.0789% ( 1) 00:13:26.825 12.440 - 12.492: 0.1184% ( 3) 00:13:26.825 12.492 - 12.543: 0.2236% ( 8) 00:13:26.825 12.543 - 12.594: 0.7759% ( 42) 00:13:26.825 12.594 - 12.646: 2.1831% ( 107) 00:13:26.825 12.646 - 12.697: 5.2867% ( 236) 00:13:26.825 12.697 - 12.749: 10.5471% ( 400) 00:13:26.825 12.749 - 12.800: 18.0826% ( 573) 00:13:26.825 12.800 - 12.851: 26.4992% ( 640) 00:13:26.825 12.851 - 12.903: 34.5871% ( 615) 00:13:26.825 12.903 - 12.954: 41.5965% ( 533) 00:13:26.825 12.954 - 13.006: 48.7901% ( 547) 00:13:26.825 13.006 - 13.057: 55.2998% ( 495) 00:13:26.825 13.057 - 13.108: 61.7438% ( 490) 00:13:26.825 13.108 - 13.160: 66.5571% ( 366) 00:13:26.825 13.160 - 13.263: 75.7102% ( 696) 00:13:26.825 13.263 - 13.365: 82.1278% ( 488) 00:13:26.825 13.365 - 13.468: 86.5334% ( 335) 00:13:26.825 13.468 - 13.571: 89.4135% ( 219) 00:13:26.825 13.571 - 13.674: 91.7017% ( 174) 00:13:26.825 13.674 - 13.777: 93.0957% ( 106) 00:13:26.825 13.777 - 13.880: 94.0821% ( 75) 00:13:26.825 13.880 - 13.982: 94.3977% ( 24) 00:13:26.825 13.982 - 14.085: 94.6081% ( 16) 00:13:26.825 14.085 - 14.188: 94.7133% ( 8) 00:13:26.825 14.188 - 14.291: 94.7791% ( 5) 00:13:26.825 14.291 - 14.394: 94.8580% ( 6) 00:13:26.825 14.394 - 14.496: 94.8974% ( 3) 00:13:26.825 14.599 - 14.702: 94.9237% ( 2) 00:13:26.825 14.702 - 14.805: 94.9632% ( 3) 00:13:26.825 14.805 - 14.908: 94.9763% ( 1) 00:13:26.825 14.908 - 15.010: 95.0026% ( 2) 00:13:26.825 15.010 - 15.113: 95.0158% ( 1) 00:13:26.825 15.216 - 15.319: 95.0421% ( 2) 00:13:26.825 15.319 - 15.422: 95.0552% ( 1) 00:13:26.825 15.524 - 15.627: 95.0684% ( 1) 00:13:26.825 15.627 - 15.730: 95.0815% ( 1) 00:13:26.825 15.833 - 15.936: 95.0947% ( 1) 00:13:26.825 16.039 - 16.141: 95.1078% ( 1) 00:13:26.825 16.141 - 16.244: 95.1210% ( 1) 00:13:26.825 16.347 - 16.450: 95.1473% ( 2) 00:13:26.825 16.450 - 16.553: 95.1736% ( 2) 00:13:26.825 16.758 - 16.861: 95.2130% ( 3) 00:13:26.825 16.861 - 16.964: 95.2393% ( 2) 00:13:26.825 17.067 - 17.169: 95.2920% ( 4) 00:13:26.825 17.169 - 17.272: 95.3972% ( 8) 00:13:26.825 17.272 - 17.375: 95.5418% ( 11) 00:13:26.825 17.375 - 17.478: 95.7391% ( 15) 00:13:26.825 17.478 - 17.581: 95.9627% ( 17) 00:13:26.825 17.581 - 17.684: 96.1994% ( 18) 00:13:26.825 17.684 - 17.786: 96.4229% ( 17) 00:13:26.825 17.786 - 17.889: 96.6334% ( 16) 00:13:26.825 17.889 - 17.992: 96.7780% ( 11) 00:13:26.825 17.992 - 18.095: 96.9227% ( 11) 00:13:26.825 18.095 - 18.198: 97.0805% ( 12) 00:13:26.825 18.198 - 18.300: 97.1988% ( 9) 00:13:26.825 18.300 - 18.403: 97.3830% ( 14) 00:13:26.825 18.403 - 18.506: 97.4487% ( 5) 00:13:26.825 18.506 - 18.609: 97.5539% ( 8) 00:13:26.825 18.609 - 18.712: 97.6723% ( 9) 00:13:26.825 18.712 - 18.814: 97.8038% ( 10) 00:13:26.825 18.814 - 18.917: 97.8958% ( 7) 00:13:26.825 18.917 - 19.020: 98.0142% ( 9) 00:13:26.826 19.020 - 19.123: 98.1326% ( 9) 00:13:26.826 19.123 - 19.226: 98.1983% ( 5) 00:13:26.826 19.226 - 19.329: 98.2509% ( 4) 00:13:26.826 19.329 - 19.431: 98.3167% ( 5) 00:13:26.826 19.431 - 19.534: 98.3298% ( 1) 00:13:26.826 19.534 - 19.637: 98.3693% ( 3) 
00:13:26.826 19.637 - 19.740: 98.4219% ( 4) 00:13:26.826 19.740 - 19.843: 98.4745% ( 4) 00:13:26.826 19.843 - 19.945: 98.5139% ( 3) 00:13:26.826 19.945 - 20.048: 98.6191% ( 8) 00:13:26.826 20.048 - 20.151: 98.6849% ( 5) 00:13:26.826 20.151 - 20.254: 98.7112% ( 2) 00:13:26.826 20.254 - 20.357: 98.7638% ( 4) 00:13:26.826 20.357 - 20.459: 98.8427% ( 6) 00:13:26.826 20.459 - 20.562: 98.8559% ( 1) 00:13:26.826 20.768 - 20.871: 98.8822% ( 2) 00:13:26.826 20.871 - 20.973: 98.9085% ( 2) 00:13:26.826 20.973 - 21.076: 98.9479% ( 3) 00:13:26.826 21.179 - 21.282: 98.9742% ( 2) 00:13:26.826 21.385 - 21.488: 98.9874% ( 1) 00:13:26.826 21.590 - 21.693: 99.0005% ( 1) 00:13:26.826 21.899 - 22.002: 99.0137% ( 1) 00:13:26.826 22.618 - 22.721: 99.0268% ( 1) 00:13:26.826 23.030 - 23.133: 99.0531% ( 2) 00:13:26.826 23.133 - 23.235: 99.0794% ( 2) 00:13:26.826 23.235 - 23.338: 99.1189% ( 3) 00:13:26.826 23.647 - 23.749: 99.1320% ( 1) 00:13:26.826 23.955 - 24.058: 99.1452% ( 1) 00:13:26.826 24.161 - 24.263: 99.1583% ( 1) 00:13:26.826 24.469 - 24.572: 99.1715% ( 1) 00:13:26.826 24.778 - 24.880: 99.1846% ( 1) 00:13:26.826 24.983 - 25.086: 99.1978% ( 1) 00:13:26.826 25.086 - 25.189: 99.2109% ( 1) 00:13:26.826 25.189 - 25.292: 99.2241% ( 1) 00:13:26.826 25.292 - 25.394: 99.2372% ( 1) 00:13:26.826 25.497 - 25.600: 99.2898% ( 4) 00:13:26.826 25.703 - 25.806: 99.3293% ( 3) 00:13:26.826 25.806 - 25.908: 99.3425% ( 1) 00:13:26.826 25.908 - 26.011: 99.3951% ( 4) 00:13:26.826 26.011 - 26.114: 99.4214% ( 2) 00:13:26.826 26.114 - 26.217: 99.4477% ( 2) 00:13:26.826 26.217 - 26.320: 99.4871% ( 3) 00:13:26.826 26.320 - 26.525: 99.5660% ( 6) 00:13:26.826 26.525 - 26.731: 99.5792% ( 1) 00:13:26.826 26.731 - 26.937: 99.6186% ( 3) 00:13:26.826 26.937 - 27.142: 99.6318% ( 1) 00:13:26.826 27.348 - 27.553: 99.6581% ( 2) 00:13:26.826 28.376 - 28.582: 99.6712% ( 1) 00:13:26.826 29.198 - 29.404: 99.6844% ( 1) 00:13:26.826 29.610 - 29.815: 99.7238% ( 3) 00:13:26.826 29.815 - 30.021: 99.7370% ( 1) 00:13:26.826 30.021 - 30.227: 99.7633% ( 2) 00:13:26.826 30.227 - 30.432: 99.7896% ( 2) 00:13:26.826 30.432 - 30.638: 99.8159% ( 2) 00:13:26.826 30.638 - 30.843: 99.8290% ( 1) 00:13:26.826 30.843 - 31.049: 99.8422% ( 1) 00:13:26.826 31.255 - 31.460: 99.8685% ( 2) 00:13:26.826 31.871 - 32.077: 99.8816% ( 1) 00:13:26.826 32.694 - 32.900: 99.8948% ( 1) 00:13:26.826 32.900 - 33.105: 99.9079% ( 1) 00:13:26.826 34.339 - 34.545: 99.9211% ( 1) 00:13:26.826 42.564 - 42.769: 99.9342% ( 1) 00:13:26.826 46.676 - 46.882: 99.9605% ( 2) 00:13:26.826 47.088 - 47.293: 99.9737% ( 1) 00:13:26.826 47.704 - 47.910: 99.9868% ( 1) 00:13:26.826 99.110 - 99.521: 100.0000% ( 1) 00:13:26.826 00:13:26.826 Complete histogram 00:13:26.826 ================== 00:13:26.826 Range in us Cumulative Count 00:13:26.826 7.659 - 7.711: 1.1310% ( 86) 00:13:26.826 7.711 - 7.762: 12.7433% ( 883) 00:13:26.826 7.762 - 7.814: 35.6786% ( 1744) 00:13:26.826 7.814 - 7.865: 55.9968% ( 1545) 00:13:26.826 7.865 - 7.916: 65.3866% ( 714) 00:13:26.826 7.916 - 7.968: 71.2125% ( 443) 00:13:26.826 7.968 - 8.019: 76.1441% ( 375) 00:13:26.826 8.019 - 8.071: 79.9448% ( 289) 00:13:26.826 8.071 - 8.122: 81.9832% ( 155) 00:13:26.826 8.122 - 8.173: 83.9558% ( 150) 00:13:26.826 8.173 - 8.225: 86.7833% ( 215) 00:13:26.826 8.225 - 8.276: 88.7691% ( 151) 00:13:26.826 8.276 - 8.328: 90.0053% ( 94) 00:13:26.826 8.328 - 8.379: 90.6891% ( 52) 00:13:26.826 8.379 - 8.431: 91.8464% ( 88) 00:13:26.826 8.431 - 8.482: 92.8590% ( 77) 00:13:26.826 8.482 - 8.533: 93.8848% ( 78) 00:13:26.826 8.533 - 8.585: 94.7528% ( 66) 
00:13:26.826 8.585 - 8.636: 95.5024% ( 57) 00:13:26.826 8.636 - 8.688: 95.9232% ( 32) 00:13:26.826 8.688 - 8.739: 96.2125% ( 22) 00:13:26.826 8.739 - 8.790: 96.4229% ( 16) 00:13:26.826 8.790 - 8.842: 96.5281% ( 8) 00:13:26.826 8.842 - 8.893: 96.7386% ( 16) 00:13:26.826 8.893 - 8.945: 96.8306% ( 7) 00:13:26.826 8.945 - 8.996: 96.9621% ( 10) 00:13:26.826 8.996 - 9.047: 97.0673% ( 8) 00:13:26.826 9.047 - 9.099: 97.1331% ( 5) 00:13:26.826 9.099 - 9.150: 97.1988% ( 5) 00:13:26.826 9.150 - 9.202: 97.2120% ( 1) 00:13:26.826 9.202 - 9.253: 97.2514% ( 3) 00:13:26.826 9.253 - 9.304: 97.2646% ( 1) 00:13:26.826 9.304 - 9.356: 97.2777% ( 1) 00:13:26.826 9.356 - 9.407: 97.3172% ( 3) 00:13:26.826 9.407 - 9.459: 97.3567% ( 3) 00:13:26.826 9.459 - 9.510: 97.4093% ( 4) 00:13:26.826 9.510 - 9.561: 97.4356% ( 2) 00:13:26.826 9.613 - 9.664: 97.4487% ( 1) 00:13:26.826 9.664 - 9.716: 97.4882% ( 3) 00:13:26.826 9.716 - 9.767: 97.5013% ( 1) 00:13:26.826 9.767 - 9.818: 97.5276% ( 2) 00:13:26.826 9.818 - 9.870: 97.5539% ( 2) 00:13:26.826 9.870 - 9.921: 97.5802% ( 2) 00:13:26.826 9.973 - 10.024: 97.5934% ( 1) 00:13:26.826 10.024 - 10.076: 97.6460% ( 4) 00:13:26.826 10.076 - 10.127: 97.6591% ( 1) 00:13:26.826 10.538 - 10.590: 97.6723% ( 1) 00:13:26.826 10.641 - 10.692: 97.6854% ( 1) 00:13:26.826 11.001 - 11.052: 97.7117% ( 2) 00:13:26.826 11.155 - 11.206: 97.7249% ( 1) 00:13:26.826 11.206 - 11.258: 97.7380% ( 1) 00:13:26.826 11.618 - 11.669: 97.7512% ( 1) 00:13:26.826 11.669 - 11.720: 97.7643% ( 1) 00:13:26.826 12.800 - 12.851: 97.7775% ( 1) 00:13:26.826 13.006 - 13.057: 97.7906% ( 1) 00:13:26.826 13.057 - 13.108: 97.8169% ( 2) 00:13:26.826 13.108 - 13.160: 97.8432% ( 2) 00:13:26.826 13.160 - 13.263: 97.8827% ( 3) 00:13:26.826 13.263 - 13.365: 98.0142% ( 10) 00:13:26.826 13.365 - 13.468: 98.0537% ( 3) 00:13:26.826 13.468 - 13.571: 98.1326% ( 6) 00:13:26.826 13.571 - 13.674: 98.2378% ( 8) 00:13:26.826 13.674 - 13.777: 98.3298% ( 7) 00:13:26.826 13.777 - 13.880: 98.3693% ( 3) 00:13:26.826 13.880 - 13.982: 98.4482% ( 6) 00:13:26.826 13.982 - 14.085: 98.5139% ( 5) 00:13:26.826 14.085 - 14.188: 98.5928% ( 6) 00:13:26.826 14.188 - 14.291: 98.6191% ( 2) 00:13:26.826 14.291 - 14.394: 98.6454% ( 2) 00:13:26.826 14.394 - 14.496: 98.6718% ( 2) 00:13:26.826 14.599 - 14.702: 98.7112% ( 3) 00:13:26.826 15.113 - 15.216: 98.7375% ( 2) 00:13:26.826 15.730 - 15.833: 98.7507% ( 1) 00:13:26.826 15.936 - 16.039: 98.7638% ( 1) 00:13:26.826 16.244 - 16.347: 98.7770% ( 1) 00:13:26.826 16.450 - 16.553: 98.7901% ( 1) 00:13:26.826 16.553 - 16.655: 98.8033% ( 1) 00:13:26.826 16.758 - 16.861: 98.8164% ( 1) 00:13:26.826 16.861 - 16.964: 98.8427% ( 2) 00:13:26.826 17.067 - 17.169: 98.8690% ( 2) 00:13:26.826 17.272 - 17.375: 98.9216% ( 4) 00:13:26.826 17.478 - 17.581: 98.9348% ( 1) 00:13:26.826 17.581 - 17.684: 98.9479% ( 1) 00:13:26.826 17.684 - 17.786: 98.9611% ( 1) 00:13:26.826 17.786 - 17.889: 98.9874% ( 2) 00:13:26.826 17.889 - 17.992: 99.0005% ( 1) 00:13:26.826 17.992 - 18.095: 99.0137% ( 1) 00:13:26.826 18.095 - 18.198: 99.0268% ( 1) 00:13:26.826 18.198 - 18.300: 99.0400% ( 1) 00:13:26.826 18.403 - 18.506: 99.0531% ( 1) 00:13:26.826 18.506 - 18.609: 99.0663% ( 1) 00:13:26.826 18.609 - 18.712: 99.0926% ( 2) 00:13:26.826 18.712 - 18.814: 99.1057% ( 1) 00:13:26.826 18.814 - 18.917: 99.1189% ( 1) 00:13:26.826 18.917 - 19.020: 99.1452% ( 2) 00:13:26.826 19.123 - 19.226: 99.1583% ( 1) 00:13:26.826 19.226 - 19.329: 99.1846% ( 2) 00:13:26.826 19.637 - 19.740: 99.1978% ( 1) 00:13:26.826 19.945 - 20.048: 99.2109% ( 1) 00:13:26.826 20.151 - 20.254: 
99.2241% ( 1) 00:13:26.826 20.357 - 20.459: 99.2635% ( 3) 00:13:26.826 20.459 - 20.562: 99.3161% ( 4) 00:13:26.826 20.562 - 20.665: 99.3556% ( 3) 00:13:26.826 20.665 - 20.768: 99.4871% ( 10) 00:13:26.826 20.768 - 20.871: 99.5266% ( 3) 00:13:26.826 20.871 - 20.973: 99.5923% ( 5) 00:13:26.826 21.076 - 21.179: 99.6055% ( 1) 00:13:26.826 21.179 - 21.282: 99.6186% ( 1) 00:13:26.826 21.282 - 21.385: 99.6318% ( 1) 00:13:26.826 22.824 - 22.927: 99.6449% ( 1) 00:13:26.826 23.133 - 23.235: 99.6581% ( 1) 00:13:26.826 23.544 - 23.647: 99.6712% ( 1) 00:13:26.826 23.647 - 23.749: 99.6844% ( 1) 00:13:26.826 24.366 - 24.469: 99.6975% ( 1) 00:13:26.826 24.572 - 24.675: 99.7107% ( 1) 00:13:26.826 24.675 - 24.778: 99.7238% ( 1) 00:13:26.826 24.778 - 24.880: 99.7370% ( 1) 00:13:26.826 24.880 - 24.983: 99.7501% ( 1) 00:13:26.826 24.983 - 25.086: 99.7764% ( 2) 00:13:26.826 25.086 - 25.189: 99.8159% ( 3) 00:13:26.826 25.292 - 25.394: 99.8553% ( 3) 00:13:26.826 25.806 - 25.908: 99.8685% ( 1) 00:13:26.826 26.114 - 26.217: 99.8948% ( 2) 00:13:26.826 26.217 - 26.320: 99.9079% ( 1) 00:13:26.826 30.432 - 30.638: 99.9211% ( 1) 00:13:26.826 32.077 - 32.283: 99.9342% ( 1) 00:13:26.826 39.274 - 39.480: 99.9474% ( 1) 00:13:26.826 44.826 - 45.031: 99.9605% ( 1) 00:13:26.826 61.687 - 62.098: 99.9737% ( 1) 00:13:26.826 63.743 - 64.154: 99.9868% ( 1) 00:13:26.826 103.634 - 104.045: 100.0000% ( 1) 00:13:26.826 00:13:26.826 ************************************ 00:13:26.826 END TEST nvme_overhead 00:13:26.826 ************************************ 00:13:26.826 00:13:26.826 real 0m1.279s 00:13:26.826 user 0m1.093s 00:13:26.826 sys 0m0.142s 00:13:26.826 11:24:08 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.826 11:24:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:13:26.826 11:24:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:26.826 11:24:08 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:13:26.826 11:24:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.826 11:24:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.826 ************************************ 00:13:26.826 START TEST nvme_arbitration 00:13:26.826 ************************************ 00:13:26.826 11:24:08 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:31.011 Initializing NVMe Controllers 00:13:31.011 Attached to 0000:00:10.0 00:13:31.011 Attached to 0000:00:11.0 00:13:31.011 Attached to 0000:00:13.0 00:13:31.011 Attached to 0000:00:12.0 00:13:31.012 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:13:31.012 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:13:31.012 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:13:31.012 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:13:31.012 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:13:31.012 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:13:31.012 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:31.012 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:13:31.012 Initialization complete. Launching workers. 
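The per-core result lines that follow pair an IO/s figure with a projected secs/100000 ios figure; the two columns are mutually consistent, and consistent with reading -n 100000 in the expanded arbitration command line above as the per-worker io count (an interpretation, not something the log states). A quick check:

  # Consistency check for the per-core lines below: 100000 ios at
  # 533.33 IO/s should take 187.50 s. Assumes -n 100000 is the
  # per-worker io count -- an interpretation, not stated in the log.
  awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 533.33 }'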
00:13:31.012 Starting thread on core 1 with urgent priority queue 00:13:31.012 Starting thread on core 2 with urgent priority queue 00:13:31.012 Starting thread on core 3 with urgent priority queue 00:13:31.012 Starting thread on core 0 with urgent priority queue 00:13:31.012 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:13:31.012 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:13:31.012 QEMU NVMe Ctrl (12341 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:13:31.012 QEMU NVMe Ctrl (12342 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:13:31.012 QEMU NVMe Ctrl (12343 ) core 2: 554.67 IO/s 180.29 secs/100000 ios 00:13:31.012 QEMU NVMe Ctrl (12342 ) core 3: 576.00 IO/s 173.61 secs/100000 ios 00:13:31.012 ======================================================== 00:13:31.012 00:13:31.012 00:13:31.012 real 0m3.453s 00:13:31.012 user 0m9.431s 00:13:31.012 sys 0m0.180s 00:13:31.012 11:24:11 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.012 ************************************ 00:13:31.012 END TEST nvme_arbitration 00:13:31.012 ************************************ 00:13:31.012 11:24:11 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:13:31.012 11:24:11 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:31.012 11:24:11 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:31.012 11:24:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.012 11:24:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.012 ************************************ 00:13:31.012 START TEST nvme_single_aen 00:13:31.012 ************************************ 00:13:31.012 11:24:11 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:31.012 Asynchronous Event Request test 00:13:31.012 Attached to 0000:00:10.0 00:13:31.012 Attached to 0000:00:11.0 00:13:31.012 Attached to 0000:00:13.0 00:13:31.012 Attached to 0000:00:12.0 00:13:31.012 Reset controller to setup AER completions for this process 00:13:31.012 Registering asynchronous event callbacks... 
00:13:31.012 Getting orig temperature thresholds of all controllers 00:13:31.012 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:31.012 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:31.012 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:31.012 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:31.012 Setting all controllers temperature threshold low to trigger AER 00:13:31.012 Waiting for all controllers temperature threshold to be set lower 00:13:31.012 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:31.012 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:31.012 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:31.012 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:31.012 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:31.012 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:31.012 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:31.012 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:31.012 Waiting for all controllers to trigger AER and reset threshold 00:13:31.012 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:31.012 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:31.012 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:31.012 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:31.012 Cleaning up... 00:13:31.012 00:13:31.012 real 0m0.301s 00:13:31.012 user 0m0.091s 00:13:31.012 sys 0m0.160s 00:13:31.012 ************************************ 00:13:31.012 END TEST nvme_single_aen 00:13:31.012 ************************************ 00:13:31.012 11:24:12 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.012 11:24:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:13:31.012 11:24:12 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:13:31.012 11:24:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:31.012 11:24:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.012 11:24:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.012 ************************************ 00:13:31.012 START TEST nvme_doorbell_aers 00:13:31.012 ************************************ 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
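The xtrace markers above (`nvme/nvme.sh@70` through `@73`) show nvme_doorbell_aers enumerating the controller PCIe addresses and then looping over them; the per-device 10-second doorbell_aers runs that follow are that loop executing. A sketch reconstructed from the trace, not the verbatim function body:

  # Reconstructed from the xtrace above: enumerate PCIe addresses via
  # gen_nvme.sh | jq, then give each device a 10-second doorbell_aers run.
  nvme_doorbell_aers() {
      local bdfs bdf
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      for bdf in "${bdfs[@]}"; do
          timeout --preserve-status 10 \
              "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
              -r "trtype:PCIe traddr:$bdf"
      done
  }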
00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:31.012 11:24:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:31.271 [2024-10-07 11:24:12.757849] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:13:41.333 Executing: test_write_invalid_db 00:13:41.333 Waiting for AER completion... 00:13:41.333 Failure: test_write_invalid_db 00:13:41.333 00:13:41.333 Executing: test_invalid_db_write_overflow_sq 00:13:41.333 Waiting for AER completion... 00:13:41.333 Failure: test_invalid_db_write_overflow_sq 00:13:41.333 00:13:41.333 Executing: test_invalid_db_write_overflow_cq 00:13:41.333 Waiting for AER completion... 00:13:41.333 Failure: test_invalid_db_write_overflow_cq 00:13:41.333 00:13:41.333 11:24:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:41.333 11:24:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:41.333 [2024-10-07 11:24:22.832974] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:13:51.300 Executing: test_write_invalid_db 00:13:51.300 Waiting for AER completion... 00:13:51.300 Failure: test_write_invalid_db 00:13:51.300 00:13:51.300 Executing: test_invalid_db_write_overflow_sq 00:13:51.300 Waiting for AER completion... 00:13:51.300 Failure: test_invalid_db_write_overflow_sq 00:13:51.300 00:13:51.300 Executing: test_invalid_db_write_overflow_cq 00:13:51.300 Waiting for AER completion... 00:13:51.300 Failure: test_invalid_db_write_overflow_cq 00:13:51.300 00:13:51.300 11:24:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:51.300 11:24:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:51.300 [2024-10-07 11:24:32.912674] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:01.295 Executing: test_write_invalid_db 00:14:01.295 Waiting for AER completion... 00:14:01.295 Failure: test_write_invalid_db 00:14:01.295 00:14:01.295 Executing: test_invalid_db_write_overflow_sq 00:14:01.295 Waiting for AER completion... 00:14:01.295 Failure: test_invalid_db_write_overflow_sq 00:14:01.295 00:14:01.295 Executing: test_invalid_db_write_overflow_cq 00:14:01.295 Waiting for AER completion... 
00:14:01.295 Failure: test_invalid_db_write_overflow_cq 00:14:01.295 00:14:01.295 11:24:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:01.295 11:24:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:01.295 [2024-10-07 11:24:42.967425] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.272 Executing: test_write_invalid_db 00:14:11.272 Waiting for AER completion... 00:14:11.272 Failure: test_write_invalid_db 00:14:11.272 00:14:11.272 Executing: test_invalid_db_write_overflow_sq 00:14:11.272 Waiting for AER completion... 00:14:11.272 Failure: test_invalid_db_write_overflow_sq 00:14:11.272 00:14:11.272 Executing: test_invalid_db_write_overflow_cq 00:14:11.272 Waiting for AER completion... 00:14:11.272 Failure: test_invalid_db_write_overflow_cq 00:14:11.272 00:14:11.272 00:14:11.272 real 0m40.333s 00:14:11.272 user 0m28.475s 00:14:11.272 sys 0m11.468s 00:14:11.272 11:24:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.272 ************************************ 00:14:11.272 END TEST nvme_doorbell_aers 00:14:11.272 ************************************ 00:14:11.272 11:24:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:14:11.272 11:24:52 nvme -- nvme/nvme.sh@97 -- # uname 00:14:11.272 11:24:52 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:14:11.272 11:24:52 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:11.272 11:24:52 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:14:11.272 11:24:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.272 11:24:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.272 ************************************ 00:14:11.272 START TEST nvme_multi_aen 00:14:11.272 ************************************ 00:14:11.272 11:24:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:11.530 [2024-10-07 11:24:53.039248] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.039577] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.039719] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.041530] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.041721] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.041866] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.043590] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. 
Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.043783] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.043879] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.045345] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.045384] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 [2024-10-07 11:24:53.045398] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65204) is not found. Dropping the request. 00:14:11.530 Child process pid: 65725 00:14:11.788 [Child] Asynchronous Event Request test 00:14:11.788 [Child] Attached to 0000:00:10.0 00:14:11.788 [Child] Attached to 0000:00:11.0 00:14:11.788 [Child] Attached to 0000:00:13.0 00:14:11.788 [Child] Attached to 0000:00:12.0 00:14:11.788 [Child] Registering asynchronous event callbacks... 00:14:11.788 [Child] Getting orig temperature thresholds of all controllers 00:14:11.788 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:11.788 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:11.788 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:11.788 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:11.788 [Child] Waiting for all controllers to trigger AER and reset threshold 00:14:11.788 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:11.788 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:11.788 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:11.788 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:11.788 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:11.788 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:11.788 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:11.788 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:11.788 [Child] Cleaning up... 00:14:11.788 Asynchronous Event Request test 00:14:11.788 Attached to 0000:00:10.0 00:14:11.788 Attached to 0000:00:11.0 00:14:11.788 Attached to 0000:00:13.0 00:14:11.788 Attached to 0000:00:12.0 00:14:11.788 Reset controller to setup AER completions for this process 00:14:11.788 Registering asynchronous event callbacks... 
00:14:11.788 Getting orig temperature thresholds of all controllers 00:14:11.788 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:11.788 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:11.788 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:11.788 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:11.788 Setting all controllers temperature threshold low to trigger AER 00:14:11.788 Waiting for all controllers temperature threshold to be set lower 00:14:11.788 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:11.788 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:11.788 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:11.788 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:11.788 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:11.788 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:11.788 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:11.788 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:11.788 Waiting for all controllers to trigger AER and reset threshold 00:14:11.789 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:11.789 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:11.789 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:11.789 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:11.789 Cleaning up... 00:14:11.789 00:14:11.789 real 0m0.670s 00:14:11.789 user 0m0.216s 00:14:11.789 sys 0m0.339s 00:14:11.789 11:24:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.789 11:24:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:14:11.789 ************************************ 00:14:11.789 END TEST nvme_multi_aen 00:14:11.789 ************************************ 00:14:12.047 11:24:53 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:12.047 11:24:53 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:12.047 11:24:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.047 11:24:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.047 ************************************ 00:14:12.047 START TEST nvme_startup 00:14:12.047 ************************************ 00:14:12.047 11:24:53 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:12.305 Initializing NVMe Controllers 00:14:12.305 Attached to 0000:00:10.0 00:14:12.305 Attached to 0000:00:11.0 00:14:12.305 Attached to 0000:00:13.0 00:14:12.305 Attached to 0000:00:12.0 00:14:12.305 Initialization complete. 00:14:12.305 Time used:205896.469 (us). 
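The nvme_doorbell_aers phase above drives the same test binary once per attached controller, bounding each pass with a 10-second timeout so a stuck AER cannot hang the suite. A condensed sketch of that loop follows; it mirrors the nvme.sh@72-73 xtrace visible earlier, but the bdf list and the relative binary path are illustrative placeholders, not the verbatim nvme.sh source:

    #!/usr/bin/env bash
    # Condensed sketch of the per-controller loop traced above (nvme.sh@72-73).
    # The bdf list and binary path are placeholders for this illustration.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # --preserve-status: exit with the test's own status even when the
        # 10 s budget expires, instead of timeout(1)'s usual 124.
        timeout --preserve-status 10 \
            ./test/nvme/doorbell_aers/doorbell_aers -r "trtype:PCIe traddr:${bdf}"
    done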
00:14:12.305 00:14:12.305 real 0m0.314s 00:14:12.305 user 0m0.094s 00:14:12.305 sys 0m0.172s 00:14:12.305 ************************************ 00:14:12.305 END TEST nvme_startup 00:14:12.305 ************************************ 00:14:12.305 11:24:53 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.305 11:24:53 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:14:12.305 11:24:53 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:14:12.305 11:24:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:12.305 11:24:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.305 11:24:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.305 ************************************ 00:14:12.305 START TEST nvme_multi_secondary 00:14:12.305 ************************************ 00:14:12.305 11:24:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:14:12.305 11:24:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65781 00:14:12.305 11:24:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:14:12.305 11:24:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65782 00:14:12.305 11:24:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:12.305 11:24:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:14:15.663 Initializing NVMe Controllers 00:14:15.663 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:15.663 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:15.663 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:15.663 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:15.663 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:15.663 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:15.663 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:15.663 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:15.663 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:15.663 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:15.663 Initialization complete. Launching workers. 
00:14:15.663 ======================================================== 00:14:15.663 Latency(us) 00:14:15.663 Device Information : IOPS MiB/s Average min max 00:14:15.663 PCIE (0000:00:10.0) NSID 1 from core 1: 4741.48 18.52 3371.88 1456.94 8063.86 00:14:15.663 PCIE (0000:00:11.0) NSID 1 from core 1: 4741.48 18.52 3373.95 1338.71 7798.05 00:14:15.663 PCIE (0000:00:13.0) NSID 1 from core 1: 4741.48 18.52 3374.46 1240.82 7703.76 00:14:15.663 PCIE (0000:00:12.0) NSID 1 from core 1: 4741.48 18.52 3374.52 1405.01 7865.18 00:14:15.663 PCIE (0000:00:12.0) NSID 2 from core 1: 4741.48 18.52 3374.73 1416.73 9577.07 00:14:15.663 PCIE (0000:00:12.0) NSID 3 from core 1: 4741.48 18.52 3374.85 1406.19 8310.29 00:14:15.663 ======================================================== 00:14:15.663 Total : 28448.87 111.13 3374.06 1240.82 9577.07 00:14:15.663 00:14:15.922 Initializing NVMe Controllers 00:14:15.922 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:15.922 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:15.922 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:15.922 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:15.922 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:15.922 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:15.922 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:15.922 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:15.922 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:15.922 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:15.922 Initialization complete. Launching workers. 00:14:15.922 ======================================================== 00:14:15.922 Latency(us) 00:14:15.922 Device Information : IOPS MiB/s Average min max 00:14:15.922 PCIE (0000:00:10.0) NSID 1 from core 2: 3153.99 12.32 5071.09 1210.99 13245.91 00:14:15.922 PCIE (0000:00:11.0) NSID 1 from core 2: 3153.99 12.32 5072.33 1354.66 13496.63 00:14:15.922 PCIE (0000:00:13.0) NSID 1 from core 2: 3153.99 12.32 5072.16 1221.19 12756.33 00:14:15.922 PCIE (0000:00:12.0) NSID 1 from core 2: 3153.99 12.32 5071.94 1264.87 13263.27 00:14:15.922 PCIE (0000:00:12.0) NSID 2 from core 2: 3153.99 12.32 5068.44 1211.35 13923.30 00:14:15.922 PCIE (0000:00:12.0) NSID 3 from core 2: 3153.99 12.32 5064.37 797.08 13004.29 00:14:15.922 ======================================================== 00:14:15.922 Total : 18923.93 73.92 5070.06 797.08 13923.30 00:14:15.922 00:14:15.922 11:24:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65781 00:14:17.828 Initializing NVMe Controllers 00:14:17.828 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:17.828 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:17.828 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:17.828 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:17.828 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:17.828 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:17.828 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:17.828 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:17.828 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:17.828 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:17.828 Initialization complete. Launching workers. 
00:14:17.828 ======================================================== 00:14:17.828 Latency(us) 00:14:17.828 Device Information : IOPS MiB/s Average min max 00:14:17.828 PCIE (0000:00:10.0) NSID 1 from core 0: 7793.60 30.44 2051.26 915.19 8784.19 00:14:17.828 PCIE (0000:00:11.0) NSID 1 from core 0: 7793.60 30.44 2052.44 928.92 8710.42 00:14:17.828 PCIE (0000:00:13.0) NSID 1 from core 0: 7793.60 30.44 2052.41 860.22 8706.23 00:14:17.828 PCIE (0000:00:12.0) NSID 1 from core 0: 7793.60 30.44 2052.36 856.14 9730.97 00:14:17.828 PCIE (0000:00:12.0) NSID 2 from core 0: 7793.60 30.44 2052.31 811.51 10381.29 00:14:17.828 PCIE (0000:00:12.0) NSID 3 from core 0: 7793.60 30.44 2052.26 759.64 9750.83 00:14:17.828 ======================================================== 00:14:17.828 Total : 46761.61 182.66 2052.17 759.64 10381.29 00:14:17.828 00:14:17.828 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65782 00:14:17.828 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65852 00:14:17.828 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:14:17.828 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65853 00:14:17.828 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:14:17.828 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:21.127 Initializing NVMe Controllers 00:14:21.127 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:21.127 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:21.127 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:21.127 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:21.127 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:21.127 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:21.127 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:21.127 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:21.128 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:21.128 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:21.128 Initialization complete. Launching workers. 
00:14:21.128 ======================================================== 00:14:21.128 Latency(us) 00:14:21.128 Device Information : IOPS MiB/s Average min max 00:14:21.128 PCIE (0000:00:10.0) NSID 1 from core 1: 5148.84 20.11 3105.19 899.30 7690.29 00:14:21.128 PCIE (0000:00:11.0) NSID 1 from core 1: 5148.84 20.11 3107.26 945.22 7223.73 00:14:21.128 PCIE (0000:00:13.0) NSID 1 from core 1: 5148.84 20.11 3107.51 953.13 7889.27 00:14:21.128 PCIE (0000:00:12.0) NSID 1 from core 1: 5148.84 20.11 3107.82 963.49 8904.48 00:14:21.128 PCIE (0000:00:12.0) NSID 2 from core 1: 5148.84 20.11 3107.96 947.32 9019.66 00:14:21.128 PCIE (0000:00:12.0) NSID 3 from core 1: 5154.17 20.13 3105.10 943.54 10234.25 00:14:21.128 ======================================================== 00:14:21.128 Total : 30898.36 120.70 3106.81 899.30 10234.25 00:14:21.128 00:14:21.128 Initializing NVMe Controllers 00:14:21.128 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:21.128 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:21.128 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:21.128 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:21.128 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:21.128 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:21.128 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:21.128 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:21.128 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:21.128 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:21.128 Initialization complete. Launching workers. 00:14:21.128 ======================================================== 00:14:21.128 Latency(us) 00:14:21.128 Device Information : IOPS MiB/s Average min max 00:14:21.128 PCIE (0000:00:10.0) NSID 1 from core 0: 5098.31 19.92 3135.82 987.31 8501.27 00:14:21.128 PCIE (0000:00:11.0) NSID 1 from core 0: 5098.31 19.92 3137.52 996.90 8109.79 00:14:21.128 PCIE (0000:00:13.0) NSID 1 from core 0: 5098.31 19.92 3137.46 976.68 7671.97 00:14:21.128 PCIE (0000:00:12.0) NSID 1 from core 0: 5098.31 19.92 3137.41 936.23 9517.52 00:14:21.128 PCIE (0000:00:12.0) NSID 2 from core 0: 5098.31 19.92 3137.34 925.59 10669.13 00:14:21.128 PCIE (0000:00:12.0) NSID 3 from core 0: 5098.31 19.92 3137.29 863.92 9503.79 00:14:21.128 ======================================================== 00:14:21.128 Total : 30589.84 119.49 3137.14 863.92 10669.13 00:14:21.128 00:14:23.710 Initializing NVMe Controllers 00:14:23.710 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:23.710 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:23.710 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:23.710 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:23.710 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:23.710 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:23.710 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:23.710 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:23.710 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:23.710 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:23.710 Initialization complete. Launching workers. 
00:14:23.710 ======================================================== 00:14:23.710 Latency(us) 00:14:23.710 Device Information : IOPS MiB/s Average min max 00:14:23.710 PCIE (0000:00:10.0) NSID 1 from core 2: 3058.30 11.95 5224.88 1111.44 18047.27 00:14:23.710 PCIE (0000:00:11.0) NSID 1 from core 2: 3058.30 11.95 5227.17 1099.73 18644.88 00:14:23.710 PCIE (0000:00:13.0) NSID 1 from core 2: 3058.30 11.95 5231.26 1141.47 20300.79 00:14:23.710 PCIE (0000:00:12.0) NSID 1 from core 2: 3058.30 11.95 5230.88 1177.69 20571.39 00:14:23.710 PCIE (0000:00:12.0) NSID 2 from core 2: 3058.30 11.95 5231.04 1157.47 19014.68 00:14:23.710 PCIE (0000:00:12.0) NSID 3 from core 2: 3058.30 11.95 5230.93 1141.44 17054.24 00:14:23.710 ======================================================== 00:14:23.710 Total : 18349.81 71.68 5229.36 1099.73 20571.39 00:14:23.710 00:14:23.710 11:25:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65852 00:14:23.710 ************************************ 00:14:23.710 END TEST nvme_multi_secondary 00:14:23.710 ************************************ 00:14:23.710 11:25:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65853 00:14:23.710 00:14:23.710 real 0m10.992s 00:14:23.710 user 0m18.555s 00:14:23.710 sys 0m1.085s 00:14:23.710 11:25:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.710 11:25:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:14:23.710 11:25:04 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:14:23.710 11:25:04 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:14:23.710 11:25:04 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/64788 ]] 00:14:23.710 11:25:04 nvme -- common/autotest_common.sh@1090 -- # kill 64788 00:14:23.710 11:25:04 nvme -- common/autotest_common.sh@1091 -- # wait 64788 00:14:23.710 [2024-10-07 11:25:04.962377] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.962895] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.962989] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.963045] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.969472] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.969586] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.969630] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.969677] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.975106] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 
00:14:23.710 [2024-10-07 11:25:04.975179] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.975207] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.975238] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.979775] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.979851] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.979895] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:04.979945] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65724) is not found. Dropping the request. 00:14:23.710 [2024-10-07 11:25:05.221050] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:14:23.711 11:25:05 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:14:23.711 11:25:05 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:14:23.711 11:25:05 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:23.711 11:25:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:23.711 11:25:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.711 11:25:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.711 ************************************ 00:14:23.711 START TEST bdev_nvme_reset_stuck_adm_cmd 00:14:23.711 ************************************ 00:14:23.711 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:23.711 * Looking for test storage... 
00:14:23.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:23.711 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:23.711 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:14:23.711 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:23.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.969 --rc genhtml_branch_coverage=1 00:14:23.969 --rc genhtml_function_coverage=1 00:14:23.969 --rc genhtml_legend=1 00:14:23.969 --rc geninfo_all_blocks=1 00:14:23.969 --rc geninfo_unexecuted_blocks=1 00:14:23.969 00:14:23.969 ' 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:23.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.969 --rc genhtml_branch_coverage=1 00:14:23.969 --rc genhtml_function_coverage=1 00:14:23.969 --rc genhtml_legend=1 00:14:23.969 --rc geninfo_all_blocks=1 00:14:23.969 --rc geninfo_unexecuted_blocks=1 00:14:23.969 00:14:23.969 ' 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:23.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.969 --rc genhtml_branch_coverage=1 00:14:23.969 --rc genhtml_function_coverage=1 00:14:23.969 --rc genhtml_legend=1 00:14:23.969 --rc geninfo_all_blocks=1 00:14:23.969 --rc geninfo_unexecuted_blocks=1 00:14:23.969 00:14:23.969 ' 00:14:23.969 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:23.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.969 --rc genhtml_branch_coverage=1 00:14:23.969 --rc genhtml_function_coverage=1 00:14:23.970 --rc genhtml_legend=1 00:14:23.970 --rc geninfo_all_blocks=1 00:14:23.970 --rc geninfo_unexecuted_blocks=1 00:14:23.970 00:14:23.970 ' 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:14:23.970 
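The scripts/common.sh xtrace above steps through a field-wise version comparison (lt 1.15 2) to decide which lcov options to export. Below is a condensed, single-function sketch of that comparison; the real scripts/common.sh splits this across the cmp_versions and decimal helpers shown in the trace, and only numeric fields are assumed here:

    # Condensed sketch of the lt/cmp_versions logic traced above; returns
    # success (0) when version $1 is strictly older than $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields default to 0, so "1.15" compares as "1.15.0".
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # newer: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # older: less-than
        done
        return 1  # equal versions: not strictly less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"  # mirrors "lt 1.15 2" in the trace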
11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66015 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66015 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 66015 ']' 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.970 11:25:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:24.228 [2024-10-07 11:25:05.715479] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:14:24.228 [2024-10-07 11:25:05.715680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66015 ] 00:14:24.228 [2024-10-07 11:25:05.895261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:24.486 [2024-10-07 11:25:06.125194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.486 [2024-10-07 11:25:06.125419] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.486 [2024-10-07 11:25:06.125571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.486 [2024-10-07 11:25:06.125601] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.422 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.422 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:14:25.422 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:14:25.422 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.422 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:25.422 nvme0n1 00:14:25.422 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_J45ar.txt 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:25.680 true 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728300307 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66049 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:14:25.680 11:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:27.607 [2024-10-07 11:25:09.161140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:14:27.607 [2024-10-07 11:25:09.161558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:27.607 [2024-10-07 11:25:09.161590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.607 [2024-10-07 11:25:09.161608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.607 [2024-10-07 11:25:09.164834] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66049 00:14:27.607 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66049 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66049 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_J45ar.txt 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # 
printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_J45ar.txt 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66015 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 66015 ']' 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 66015 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66015 00:14:27.607 killing process with pid 66015 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66015' 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 66015 00:14:27.607 11:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 66015 00:14:30.916 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:14:30.916 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:14:30.916 00:14:30.916 real 0m6.737s 00:14:30.916 user 0m22.784s 00:14:30.916 sys 0m0.804s 00:14:30.916 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.916 11:25:11 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:30.916 ************************************ 00:14:30.916 END TEST bdev_nvme_reset_stuck_adm_cmd 00:14:30.916 ************************************ 00:14:30.916 11:25:12 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:14:30.916 11:25:12 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:14:30.916 11:25:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:30.916 11:25:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.916 11:25:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.916 ************************************ 00:14:30.916 START TEST nvme_fio 00:14:30.916 ************************************ 00:14:30.916 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:14:30.916 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:14:30.916 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:14:30.916 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:30.916 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:30.916 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:14:30.916 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:14:30.916 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:30.916 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:31.175 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:31.175 11:25:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:31.175 11:25:12 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:31.434 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:31.434 fio-3.35 00:14:31.434 Starting 1 thread 00:14:35.651 00:14:35.651 test: (groupid=0, jobs=1): err= 0: pid=66203: Mon Oct 7 11:25:16 2024 00:14:35.651 read: IOPS=21.3k, BW=83.4MiB/s (87.5MB/s)(167MiB/2001msec) 00:14:35.651 slat (usec): min=4, max=246, avg= 4.87, stdev= 1.99 00:14:35.651 clat (usec): min=203, max=10612, avg=2992.73, stdev=517.15 00:14:35.651 lat (usec): min=207, max=10688, avg=2997.61, stdev=517.85 00:14:35.651 clat percentiles (usec): 00:14:35.651 | 1.00th=[ 2507], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2868], 00:14:35.651 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:14:35.651 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3294], 00:14:35.651 | 99.00th=[ 5735], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[ 9634], 00:14:35.651 | 99.99th=[10421] 00:14:35.651 bw ( KiB/s): min=84160, max=87776, per=100.00%, avg=86136.00, stdev=1831.27, samples=3 00:14:35.651 iops : min=21040, max=21944, avg=21534.00, stdev=457.82, samples=3 00:14:35.651 write: IOPS=21.2k, BW=82.8MiB/s (86.8MB/s)(166MiB/2001msec); 0 zone resets 00:14:35.651 slat (usec): min=4, max=234, avg= 5.02, stdev= 1.70 00:14:35.651 clat (usec): min=237, max=10512, avg=2997.05, stdev=525.10 00:14:35.651 lat (usec): min=242, max=10523, avg=3002.07, stdev=525.81 00:14:35.651 clat percentiles (usec): 00:14:35.651 | 1.00th=[ 2507], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2868], 00:14:35.651 | 30.00th=[ 2900], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:14:35.651 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3294], 00:14:35.651 | 99.00th=[ 5669], 99.50th=[ 7439], 99.90th=[ 9634], 99.95th=[ 9634], 00:14:35.651 | 99.99th=[10028] 00:14:35.651 bw ( KiB/s): min=84080, max=87560, per=100.00%, avg=86293.33, stdev=1923.47, samples=3 00:14:35.651 iops : min=21020, max=21890, avg=21573.33, stdev=480.87, samples=3 00:14:35.651 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:14:35.651 lat (msec) : 2=0.25%, 4=97.81%, 10=1.88%, 20=0.01% 00:14:35.651 cpu : usr=98.75%, sys=0.25%, ctx=15, majf=0, minf=608 00:14:35.651 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:35.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:35.651 issued rwts: total=42722,42409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:35.651 00:14:35.651 Run status group 0 (all jobs): 00:14:35.651 READ: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:14:35.651 WRITE: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=166MiB (174MB), run=2001-2001msec 00:14:35.651 ----------------------------------------------------- 00:14:35.651 Suppressions used: 00:14:35.651 count bytes template 00:14:35.651 1 32 /usr/src/fio/parse.c 00:14:35.651 1 8 libtcmalloc_minimal.so 00:14:35.651 ----------------------------------------------------- 00:14:35.651 00:14:35.651 11:25:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:35.651 11:25:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:35.651 11:25:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:35.651 11:25:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:35.651 11:25:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:35.651 11:25:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:35.651 11:25:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:35.651 11:25:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:35.651 11:25:17 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:35.909 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:35.909 fio-3.35 00:14:35.909 Starting 1 thread 00:14:39.203 00:14:39.203 test: (groupid=0, jobs=1): err= 0: pid=66269: Mon Oct 7 11:25:20 2024 00:14:39.203 read: IOPS=19.5k, BW=76.1MiB/s (79.8MB/s)(152MiB/2001msec) 00:14:39.203 slat (nsec): min=3918, max=86607, avg=5603.61, stdev=1929.13 00:14:39.203 clat (usec): min=247, max=11970, avg=3270.82, stdev=780.50 00:14:39.203 lat (usec): min=252, max=12057, avg=3276.43, stdev=781.59 00:14:39.203 clat percentiles (usec): 00:14:39.203 | 1.00th=[ 1893], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2900], 00:14:39.203 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3130], 60.00th=[ 3195], 00:14:39.203 | 70.00th=[ 3261], 80.00th=[ 3359], 90.00th=[ 3851], 95.00th=[ 4490], 00:14:39.203 | 99.00th=[ 7242], 99.50th=[ 8356], 99.90th=[ 9503], 99.95th=[ 9765], 00:14:39.203 | 99.99th=[11600] 00:14:39.203 bw ( KiB/s): min=69544, max=84632, per=99.12%, avg=77280.00, stdev=7551.33, samples=3 00:14:39.203 iops : min=17386, max=21158, avg=19320.00, stdev=1887.83, samples=3 00:14:39.203 write: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec); 0 zone resets 00:14:39.203 slat (nsec): min=4024, max=70260, avg=5885.10, stdev=1924.52 00:14:39.203 clat (usec): min=227, max=11706, avg=3270.61, stdev=772.05 00:14:39.203 lat (usec): min=233, max=11719, avg=3276.49, stdev=773.12 00:14:39.203 clat percentiles (usec): 00:14:39.203 | 1.00th=[ 1893], 5.00th=[ 2737], 10.00th=[ 2835], 20.00th=[ 2900], 00:14:39.203 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3130], 60.00th=[ 3195], 00:14:39.203 | 70.00th=[ 3261], 80.00th=[ 3359], 90.00th=[ 3851], 95.00th=[ 4490], 00:14:39.203 | 99.00th=[ 7111], 99.50th=[ 8225], 99.90th=[ 9503], 99.95th=[ 9765], 00:14:39.203 | 99.99th=[11338] 00:14:39.203 bw ( KiB/s): min=69640, max=84520, per=99.43%, avg=77392.00, stdev=7459.60, samples=3 00:14:39.203 iops : min=17410, max=21130, avg=19348.00, stdev=1864.90, samples=3 00:14:39.203 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:14:39.203 lat (msec) : 2=1.19%, 4=90.53%, 10=8.20%, 20=0.04% 00:14:39.203 cpu : usr=99.15%, sys=0.05%, ctx=5, majf=0, minf=609 00:14:39.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:39.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:39.203 issued rwts: total=39002,38939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:39.203 00:14:39.203 Run status group 0 (all jobs): 00:14:39.203 READ: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=152MiB (160MB), run=2001-2001msec 00:14:39.203 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (159MB), run=2001-2001msec 00:14:39.472 ----------------------------------------------------- 00:14:39.472 Suppressions used: 00:14:39.472 count bytes template 00:14:39.472 1 32 /usr/src/fio/parse.c 00:14:39.472 1 8 libtcmalloc_minimal.so 00:14:39.472 ----------------------------------------------------- 00:14:39.472 00:14:39.472 11:25:21 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:14:39.472 11:25:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:39.472 11:25:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:39.472 11:25:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:39.744 11:25:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:39.744 11:25:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:40.313 11:25:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:40.313 11:25:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:40.313 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:40.313 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:40.313 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:40.313 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:40.313 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:40.314 11:25:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:40.314 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:40.314 fio-3.35 00:14:40.314 Starting 1 thread 00:14:44.561 00:14:44.561 test: (groupid=0, jobs=1): err= 0: pid=66332: Mon Oct 7 11:25:25 2024 00:14:44.561 read: IOPS=20.5k, BW=80.0MiB/s (83.9MB/s)(160MiB/2001msec) 00:14:44.561 slat (usec): min=3, max=101, avg= 5.16, stdev= 1.92 00:14:44.561 clat (usec): min=1083, max=12180, avg=3111.66, stdev=908.67 00:14:44.561 lat (usec): min=1087, max=12184, avg=3116.82, stdev=909.81 00:14:44.561 clat percentiles (usec): 00:14:44.561 | 1.00th=[ 1926], 5.00th=[ 2409], 10.00th=[ 2704], 20.00th=[ 2835], 00:14:44.561 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 
60.00th=[ 2966], 00:14:44.561 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3261], 95.00th=[ 4752], 00:14:44.561 | 99.00th=[ 7898], 99.50th=[ 8160], 99.90th=[10290], 99.95th=[10683], 00:14:44.561 | 99.99th=[11600] 00:14:44.561 bw ( KiB/s): min=81272, max=83360, per=100.00%, avg=82058.67, stdev=1135.16, samples=3 00:14:44.561 iops : min=20318, max=20840, avg=20514.67, stdev=283.79, samples=3 00:14:44.561 write: IOPS=20.4k, BW=79.8MiB/s (83.7MB/s)(160MiB/2001msec); 0 zone resets 00:14:44.561 slat (nsec): min=4078, max=60340, avg=5356.50, stdev=1908.83 00:14:44.561 clat (usec): min=323, max=14122, avg=3114.45, stdev=908.04 00:14:44.561 lat (usec): min=329, max=14127, avg=3119.81, stdev=909.21 00:14:44.561 clat percentiles (usec): 00:14:44.561 | 1.00th=[ 1942], 5.00th=[ 2409], 10.00th=[ 2704], 20.00th=[ 2835], 00:14:44.561 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:14:44.561 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3294], 95.00th=[ 4752], 00:14:44.561 | 99.00th=[ 7898], 99.50th=[ 8160], 99.90th=[10290], 99.95th=[10814], 00:14:44.561 | 99.99th=[11863] 00:14:44.561 bw ( KiB/s): min=81344, max=83368, per=100.00%, avg=82125.33, stdev=1088.01, samples=3 00:14:44.561 iops : min=20336, max=20842, avg=20531.33, stdev=272.00, samples=3 00:14:44.561 lat (usec) : 500=0.01% 00:14:44.561 lat (msec) : 2=1.25%, 4=92.54%, 10=6.06%, 20=0.15% 00:14:44.561 cpu : usr=99.15%, sys=0.10%, ctx=6, majf=0, minf=608 00:14:44.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:44.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.561 issued rwts: total=40993,40897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.561 00:14:44.561 Run status group 0 (all jobs): 00:14:44.562 READ: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=160MiB (168MB), run=2001-2001msec 00:14:44.562 WRITE: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=160MiB (168MB), run=2001-2001msec 00:14:44.562 ----------------------------------------------------- 00:14:44.562 Suppressions used: 00:14:44.562 count bytes template 00:14:44.562 1 32 /usr/src/fio/parse.c 00:14:44.562 1 8 libtcmalloc_minimal.so 00:14:44.562 ----------------------------------------------------- 00:14:44.562 00:14:44.562 11:25:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:44.562 11:25:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:44.562 11:25:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:44.562 11:25:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:44.562 11:25:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:44.562 11:25:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:44.820 11:25:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:44.820 11:25:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:14:44.820 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:14:44.820 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:44.820 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.820 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:44.820 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.820 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:14:44.820 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:44.820 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.821 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.821 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:44.821 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:14:45.079 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:45.079 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:45.079 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:14:45.079 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:45.079 11:25:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:14:45.079 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:45.079 fio-3.35 00:14:45.079 Starting 1 thread 00:14:51.670 00:14:51.670 test: (groupid=0, jobs=1): err= 0: pid=66393: Mon Oct 7 11:25:32 2024 00:14:51.670 read: IOPS=21.1k, BW=82.3MiB/s (86.3MB/s)(165MiB/2001msec) 00:14:51.670 slat (nsec): min=4016, max=51839, avg=5059.85, stdev=1582.69 00:14:51.670 clat (usec): min=229, max=9208, avg=3036.18, stdev=602.01 00:14:51.670 lat (usec): min=234, max=9221, avg=3041.24, stdev=602.76 00:14:51.670 clat percentiles (usec): 00:14:51.670 | 1.00th=[ 2089], 5.00th=[ 2474], 10.00th=[ 2704], 20.00th=[ 2868], 00:14:51.670 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:14:51.670 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3195], 95.00th=[ 3392], 00:14:51.670 | 99.00th=[ 6128], 99.50th=[ 7570], 99.90th=[ 8717], 99.95th=[ 8848], 00:14:51.670 | 99.99th=[ 9241] 00:14:51.670 bw ( KiB/s): min=83016, max=85840, per=100.00%, avg=84810.67, stdev=1559.82, samples=3 00:14:51.670 iops : min=20754, max=21462, avg=21203.33, stdev=390.62, samples=3 00:14:51.670 write: IOPS=20.9k, BW=81.8MiB/s (85.8MB/s)(164MiB/2001msec); 0 zone resets 00:14:51.670 slat (nsec): min=4150, max=87183, avg=5277.36, stdev=1603.64 00:14:51.670 clat (usec): min=202, max=9189, avg=3035.66, stdev=596.68 00:14:51.670 lat (usec): min=207, max=9202, avg=3040.94, stdev=597.44 00:14:51.670 clat percentiles (usec): 00:14:51.670 | 1.00th=[ 2089], 5.00th=[ 2474], 10.00th=[ 2704], 20.00th=[ 2868], 00:14:51.670 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:14:51.670 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3195], 95.00th=[ 3392], 00:14:51.670 | 99.00th=[ 6128], 99.50th=[ 7570], 99.90th=[ 8717], 
99.95th=[ 8717], 00:14:51.670 | 99.99th=[ 9110] 00:14:51.670 bw ( KiB/s): min=83176, max=86000, per=100.00%, avg=84922.67, stdev=1526.35, samples=3 00:14:51.670 iops : min=20794, max=21500, avg=21230.67, stdev=381.59, samples=3 00:14:51.670 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:14:51.670 lat (msec) : 2=0.64%, 4=96.36%, 10=2.97% 00:14:51.670 cpu : usr=99.20%, sys=0.00%, ctx=3, majf=0, minf=606 00:14:51.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:51.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:51.670 issued rwts: total=42164,41914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:51.670 00:14:51.670 Run status group 0 (all jobs): 00:14:51.670 READ: bw=82.3MiB/s (86.3MB/s), 82.3MiB/s-82.3MiB/s (86.3MB/s-86.3MB/s), io=165MiB (173MB), run=2001-2001msec 00:14:51.670 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=164MiB (172MB), run=2001-2001msec 00:14:51.670 ----------------------------------------------------- 00:14:51.670 Suppressions used: 00:14:51.670 count bytes template 00:14:51.670 1 32 /usr/src/fio/parse.c 00:14:51.670 1 8 libtcmalloc_minimal.so 00:14:51.670 ----------------------------------------------------- 00:14:51.670 00:14:51.670 11:25:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:51.670 11:25:32 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:14:51.670 00:14:51.670 real 0m20.562s 00:14:51.670 user 0m15.044s 00:14:51.670 sys 0m6.758s 00:14:51.670 11:25:32 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.670 11:25:32 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:14:51.670 ************************************ 00:14:51.670 END TEST nvme_fio 00:14:51.670 ************************************ 00:14:51.670 ************************************ 00:14:51.670 END TEST nvme 00:14:51.670 ************************************ 00:14:51.670 00:14:51.670 real 1m36.369s 00:14:51.670 user 3m44.146s 00:14:51.670 sys 0m26.290s 00:14:51.670 11:25:32 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.670 11:25:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.670 11:25:32 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:14:51.670 11:25:32 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:14:51.670 11:25:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:51.670 11:25:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.670 11:25:32 -- common/autotest_common.sh@10 -- # set +x 00:14:51.670 ************************************ 00:14:51.670 START TEST nvme_scc 00:14:51.670 ************************************ 00:14:51.670 11:25:32 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:14:51.670 * Looking for test storage... 
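
All three fio passes above run through the same wrapper that the xtrace lines expose: ldd the SPDK fio plugin, look for a sanitizer runtime among its dependencies, and put that runtime ahead of the plugin in LD_PRELOAD so ASAN initializes before anything else. A minimal standalone sketch of that pattern, using the paths from this log (treat them as placeholders on another machine):

#!/usr/bin/env bash
# Sketch of the fio_nvme/fio_plugin wrapper unrolled in the xtrace above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
fio_bin=/usr/src/fio/fio
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# If the plugin links a sanitizer runtime, that runtime has to come first
# in LD_PRELOAD or ASAN refuses to start under an uninstrumented fio.
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

# fio parses ':' in --filename as a separator, hence the dotted PCIe address.
LD_PRELOAD="$asan_lib $plugin" "$fio_bin" "$job" \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
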
00:14:51.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:51.670 11:25:32 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:51.670 11:25:32 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:14:51.670 11:25:32 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:51.670 11:25:32 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@345 -- # : 1 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:14:51.670 11:25:32 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@368 -- # return 0 00:14:51.671 11:25:32 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.671 11:25:32 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:51.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.671 --rc genhtml_branch_coverage=1 00:14:51.671 --rc genhtml_function_coverage=1 00:14:51.671 --rc genhtml_legend=1 00:14:51.671 --rc geninfo_all_blocks=1 00:14:51.671 --rc geninfo_unexecuted_blocks=1 00:14:51.671 00:14:51.671 ' 00:14:51.671 11:25:32 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:51.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.671 --rc genhtml_branch_coverage=1 00:14:51.671 --rc genhtml_function_coverage=1 00:14:51.671 --rc genhtml_legend=1 00:14:51.671 --rc geninfo_all_blocks=1 00:14:51.671 --rc geninfo_unexecuted_blocks=1 00:14:51.671 00:14:51.671 ' 00:14:51.671 11:25:32 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:14:51.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.671 --rc genhtml_branch_coverage=1 00:14:51.671 --rc genhtml_function_coverage=1 00:14:51.671 --rc genhtml_legend=1 00:14:51.671 --rc geninfo_all_blocks=1 00:14:51.671 --rc geninfo_unexecuted_blocks=1 00:14:51.671 00:14:51.671 ' 00:14:51.671 11:25:32 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:51.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.671 --rc genhtml_branch_coverage=1 00:14:51.671 --rc genhtml_function_coverage=1 00:14:51.671 --rc genhtml_legend=1 00:14:51.671 --rc geninfo_all_blocks=1 00:14:51.671 --rc geninfo_unexecuted_blocks=1 00:14:51.671 00:14:51.671 ' 00:14:51.671 11:25:32 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.671 11:25:32 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.671 11:25:32 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.671 11:25:32 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.671 11:25:32 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.671 11:25:32 nvme_scc -- paths/export.sh@5 -- # export PATH 00:14:51.671 11:25:32 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
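
The lt 1.15 2 / cmp_versions walk near the top of this test (scripts/common.sh) is the stock shell idiom for comparing dotted versions: split both strings on '.', '-' and ':', then compare numerically field by field, treating missing fields as zero. A condensed sketch of the comparison the trace steps through (the real function also validates each field via decimal(), omitted here):

#!/usr/bin/env bash
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v n
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < n; v++)); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]
}

# The check from the log: lcov 1.15 predates 2, so the pre-2.0 LCOV_OPTS
# (lcov_branch_coverage/lcov_function_coverage) exported above are used.
cmp_versions 1.15 '<' 2 && echo "lcov 1.15 < 2"
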
00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:14:51.671 11:25:32 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:14:51.671 11:25:32 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.671 11:25:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:14:51.671 11:25:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:14:51.671 11:25:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:14:51.671 11:25:32 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:51.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:51.943 Waiting for block devices as requested 00:14:52.201 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:52.201 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:52.201 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:52.459 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:57.742 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:57.742 11:25:39 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:14:57.742 11:25:39 nvme_scc -- scripts/common.sh@18 -- # local i 00:14:57.742 11:25:39 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:57.742 11:25:39 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:57.742 11:25:39 nvme_scc -- scripts/common.sh@27 -- # return 0 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:57.742 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
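
Everything from here to the end of the controller scan is a single loop being unrolled by xtrace: scan_nvme_ctrls calls nvme_get for each controller, which pipes nvme id-ctrl output through an IFS=':' read and evals every "field: value" pair into a global associative array named after the device (nvme0[vid], nvme0[ssvid], ...). In source form the pattern is roughly the following sketch; it assumes nvme-cli is installed (the log runs /usr/local/src/nvme-cli/nvme):

#!/usr/bin/env bash
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                 # global assoc array, e.g. nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $reg ]] || continue       # skip the banner/blank lines
        reg=${reg//[[:space:]]/}        # "vid       " -> "vid"
        val=${val# }                    # drop the space after the colon
        eval "$ref[$reg]=\"\$val\""     # nvme0[vid]="0x1b36"
    done < <("$@")
}

nvme_get nvme0 nvme id-ctrl /dev/nvme0
echo "vid=${nvme0[vid]} sn=${nvme0[sn]} subnqn=${nvme0[subnqn]}"

The [[ -n $reg ]] guard is why the very first read in the trace discards an empty value: the first line of id-ctrl output has no "field: value" shape.
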
00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
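
Most of the values being captured are opaque bitfields (oacs=0x12a, frmw=0x3, lpa=0x7, ...). Decoding them against the bit assignments in the NVMe base specification shows what the QEMU controller actually advertises; for example, for OACS (Optional Admin Command Support), a quick decode might look like this (the bit labels follow the spec; 0x12a is the value captured above):

#!/usr/bin/env bash
oacs=${nvme0[oacs]:-0x12a}   # falls back to the log's value if unset
declare -A oacs_bits=(
    [0]="Security Send/Receive"
    [1]="Format NVM"
    [2]="Firmware Download/Commit"
    [3]="Namespace Management"
    [4]="Device Self-test"
    [5]="Directives"
    [6]="NVMe-MI Send/Receive"
    [7]="Virtualization Management"
    [8]="Doorbell Buffer Config"
)
for bit in "${!oacs_bits[@]}"; do
    (( oacs >> bit & 1 )) && printf 'OACS bit %d: %s\n' "$bit" "${oacs_bits[$bit]}"
done
# 0x12a -> bits 1, 3, 5, 8: Format NVM, Namespace Management, Directives,
# Doorbell Buffer Config.
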
00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.743 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:14:57.744 11:25:39 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:14:57.744 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:57.745 11:25:39 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.745 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.746 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
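[annotation] Just below, the namespace records its eight LBA format descriptors (lbaf0..lbaf7). In each one, lbads is the log2 of the data block size (lbads:9 = 512 B, lbads:12 = 4096 B), ms is the metadata bytes per block, and the low nibble of flbas selects the active descriptor: nvme0n1 parsed flbas 0x4 above, matching the "(in use)" tag on its lbaf4 entry. A small, hypothetical helper that derives the in-use block size from an array nvme_get filled in:

    # Hypothetical helper: compute the in-use data block size for a parsed
    # namespace array (e.g. nvme0n1). Assumes lbafN strings look like
    # "ms:0 lbads:12 rp:0 (in use)", as they do in this trace.
    ns_block_size() {
        local -n ns=$1
        local idx=$(( ns[flbas] & 0xf ))   # FLBAS low nibble = active format index
        local lbaf=${ns[lbaf$idx]}
        local lbads=${lbaf#*lbads:}        # strip everything up to "lbads:"
        lbads=${lbads%% *}                 # keep just the number
        echo $(( 1 << lbads ))             # data block size in bytes
    }
    # ns_block_size nvme0n1   -> 4096 (flbas 0x4 -> lbaf4, lbads:12)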
00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:57.747 11:25:39 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:14:57.747 11:25:39 nvme_scc -- scripts/common.sh@18 -- # local i 00:14:57.747 11:25:39 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:57.747 11:25:39 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:57.747 11:25:39 nvme_scc -- scripts/common.sh@27 -- # return 0 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.747 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:57.748 11:25:39 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:57.748 
11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
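[annotation] Among the nvme1 id-ctrl fields captured here, mdts:7 bounds the maximum data transfer size: the limit is 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN, which comes from the CAP register rather than id-ctrl). A back-of-the-envelope check, assuming the common 4 KiB minimum page:

    # MDTS is a power-of-two multiplier on the controller's minimum page
    # size. The 4 KiB MPSMIN below is an assumption for illustration.
    mdts=7
    mpsmin_bytes=4096
    echo $(( (1 << mdts) * mpsmin_bytes ))   # 524288 -> 512 KiB max transfer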
00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.748 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:57.749 11:25:39 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:57.749 11:25:39 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:57.749 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:57.750 11:25:39 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
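[annotation] Once nvme_get has filled these arrays, later test code can answer questions without re-running nvme-cli; nvme1's subnqn, for example, was just stored as nqn.2019-08.org.qemu:12340. A hedged illustration of reading the parsed state back through namerefs (the array names match what this trace populated; the loop itself is illustrative, not functions.sh code):

    # Illustrative only: walk the controller arrays this trace populated
    # and print a one-line summary per controller.
    for c in nvme0 nvme1; do
        declare -n ctrl=$c                   # nameref into the assoc array
        printf '%s: sn=%s subnqn=%s mdts=%s\n' \
            "$c" "${ctrl[sn]}" "${ctrl[subnqn]}" "${ctrl[mdts]}"
        unset -n ctrl                        # drop the nameref before reuse
    done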
00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:14:57.750 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:14:57.751 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:57.752 
11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:14:57.752 11:25:39 nvme_scc -- scripts/common.sh@18 -- # local i 00:14:57.752 11:25:39 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:57.752 11:25:39 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:57.752 11:25:39 nvme_scc -- scripts/common.sh@27 -- # return 0 00:14:57.752 11:25:39 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.752 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:14:57.753 11:25:39 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:14:57.753 11:25:39 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.753 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:14:57.754 11:25:39 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:14:57.754 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:14:57.755 
11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:57.755 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:57.756 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.019 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:58.020 11:25:39 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:14:58.020 11:25:39 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:14:58.020 11:25:39 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.020 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
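The trace above is the nvme_get helper from nvme/functions.sh at work: it runs nvme-cli's id-ns against each namespace, splits every output line on the first ':' via IFS, and evals the key/value pair into a bash associative array (nvme2n1, nvme2n2, ...). A minimal standalone sketch of that pattern follows, assuming bash >= 4.3 and nvme-cli on PATH; parse_nvme_id and the demo device /dev/nvme0n1 are illustrative stand-ins, not names from the SPDK script, and a nameref is used here in place of the script's eval.

    #!/usr/bin/env bash
    # Sketch of the key/value capture pattern visible in the trace
    # (hypothetical helper name; the real helper is nvme_get).
    parse_nvme_id() {
        local -n out=$1              # nameref to the caller's assoc array
        shift
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # "lbaf  0 " -> "lbaf0", "nsze " -> "nsze"
            val=${val# }               # drop the space that follows the ':'
            [[ -n $reg && -n $val ]] && out[$reg]=$val
        done < <("$@")
    }

    declare -A ns=()
    parse_nvme_id ns nvme id-ns /dev/nvme0n1   # assumes this device exists
    echo "nsze=${ns[nsze]} flbas=${ns[flbas]} lbaf4=${ns[lbaf4]}"

With two read variables and IFS=:, everything after the first colon lands in val, which is why composite fields such as the lbafN descriptors ("ms:0 lbads:12 rp:0 (in use)") survive intact, exactly as they do in the trace.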
00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
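Each lbafN descriptor stored next encodes one supported LBA format: ms is the metadata size in bytes, lbads the log2 of the data block size, and rp a relative-performance hint; flbas=0x4 means format 4 is the active one, and lbaf4 here is "ms:0 lbads:12 rp:0 (in use)", i.e. 4096-byte blocks with no metadata. A one-line decode, assuming the string has exactly the shape captured in this trace:

    lbaf='ms:0 lbads:12 rp:0 (in use)'
    [[ $lbaf =~ lbads:([0-9]+) ]] &&
        echo "data block: $((1 << BASH_REMATCH[1])) bytes"   # prints 4096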
00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:58.021 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 
11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 
11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:14:58.022 11:25:39 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.022 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:58.023 
11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:14:58.023 11:25:39 nvme_scc -- scripts/common.sh@18 -- # local i 00:14:58.023 11:25:39 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:58.023 11:25:39 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:58.023 11:25:39 nvme_scc -- scripts/common.sh@27 -- # return 0 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:14:58.023 11:25:39 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:14:58.023 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
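A few entries back the outer loop moved on to the fourth controller: it walks /sys/class/nvme/nvme*, resolves each controller's PCI address (0000:00:13.0 here), asks pci_can_use from scripts/common.sh whether that address may be touched, and only then runs nvme_get with id-ctrl. A simplified sketch of that scan, assuming pci_can_use honors a space-separated PCI_ALLOWED allowlist the way the empty-pattern matches in the trace suggest (the real helper also consults PCI_BLOCKED):

    shopt -s nullglob
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        # Skip controllers outside the allowlist; an empty list allows all.
        if [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $pci "* ]]; then
            continue
        fi
        echo "probing ${ctrl##*/} at $pci"   # then: nvme id-ctrl /dev/...
    done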
00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:14:58.024 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 
11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
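Once the controller array is filled, a test can gate itself on individual capability bits: a few entries below, nvme3[oncs] is captured as 0x15d, and ONCS bit 8 (0x100) advertises the Copy command that the nvme_scc suite exercises. A sketch of that kind of check (the plain variable is local to the sketch, not a name from functions.sh):

    oncs=0x15d                    # value parsed for nvme3 in this trace
    if (( oncs & 0x100 )); then
        echo "controller advertises the Copy command (Simple Copy)"
    fi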
00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:58.025 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
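The trace above and below is test/common/nvme/functions.sh capturing `nvme id-ctrl /dev/nvme3` into the nvme3 associative array: each line of nvme-cli output is split on ':' into a register name and value, empty values are skipped, and non-empty ones are eval'd into the array. A minimal sketch of that loop (whitespace handling is simplified here; the real helper does its own trimming around the eval'd assignment):

    # Parse `nvme id-ctrl` text output into an associative array, one register per entry.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # drop the padding around the register name
        [[ -n $val ]] || continue     # skip headers and lines without a value
        ctrl[$reg]=${val# }           # store the value, e.g. ctrl[oncs]=0x15d
    done < <(nvme id-ctrl /dev/nvme3)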
00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:58.026 11:25:39 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:58.026 11:25:39 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:14:58.026 11:25:39 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:14:58.027 
11:25:39 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:14:58.027 11:25:39 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:14:58.027 11:25:39 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:14:58.027 11:25:39 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:14:58.027 11:25:39 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:58.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:59.527 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.527 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.527 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.527 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:14:59.785 11:25:41 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:14:59.785 11:25:41 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:59.785 11:25:41 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.785 11:25:41 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:14:59.785 ************************************ 00:14:59.785 START TEST nvme_simple_copy 00:14:59.785 ************************************ 00:14:59.785 11:25:41 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:00.044 Initializing NVMe Controllers 00:15:00.044 Attaching to 0000:00:10.0 00:15:00.044 Controller supports SCC. Attached to 0000:00:10.0 00:15:00.044 Namespace ID: 1 size: 6GB 00:15:00.044 Initialization complete. 00:15:00.044 00:15:00.044 Controller QEMU NVMe Ctrl (12340 ) 00:15:00.044 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:15:00.044 Namespace Block Size:4096 00:15:00.044 Writing LBAs 0 to 63 with Random Data 00:15:00.044 Copied LBAs from 0 - 63 to the Destination LBA 256 00:15:00.044 LBAs matching Written Data: 64 00:15:00.044 00:15:00.044 ************************************ 00:15:00.044 END TEST nvme_simple_copy 00:15:00.044 ************************************ 00:15:00.044 real 0m0.332s 00:15:00.044 user 0m0.110s 00:15:00.044 sys 0m0.118s 00:15:00.044 11:25:41 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.044 11:25:41 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:15:00.044 00:15:00.044 real 0m8.985s 00:15:00.044 user 0m1.553s 00:15:00.044 sys 0m2.403s 00:15:00.044 11:25:41 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.044 11:25:41 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:00.044 ************************************ 00:15:00.044 END TEST nvme_scc 00:15:00.044 ************************************ 00:15:00.303 11:25:41 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:15:00.303 11:25:41 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:15:00.303 11:25:41 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:15:00.303 11:25:41 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:15:00.303 11:25:41 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:15:00.303 11:25:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:00.303 11:25:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.303 11:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:00.303 ************************************ 00:15:00.303 START TEST nvme_fdp 00:15:00.303 ************************************ 00:15:00.303 11:25:41 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:15:00.303 * Looking for test storage... 
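The controller pick traced before the setup.sh run above works off ONCS (Optional NVM Command Support): ctrl_has_scc succeeds when bit 8, the Copy command, is set, and all four controllers report oncs=0x15d (bits 0, 2, 3, 4, 6 and 8 set), so nvme1 wins simply because it is echoed first. That same bit is what lets the nvme_simple_copy test copy LBAs 0-63 to destination LBA 256 and find all 64 LBAs matching. A minimal sketch of the check, assuming the register was captured as in the parsing loop shown earlier:

    # Succeed when the controller advertises the Copy command (ONCS bit 8).
    ctrl_has_scc() {
        local -n _ctrl=$1              # nameref to a per-controller array, e.g. nvme1
        local oncs=${_ctrl[oncs]:-0}
        (( oncs & 1 << 8 ))
    }

    declare -A nvme1=([oncs]=0x15d)
    ctrl_has_scc nvme1 && echo "nvme1 supports Simple Copy"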
00:15:00.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:00.303 11:25:41 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:00.303 11:25:41 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:15:00.303 11:25:41 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:00.303 11:25:41 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.303 11:25:41 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:15:00.303 11:25:42 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.303 11:25:42 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.303 11:25:42 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.303 11:25:42 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:15:00.304 11:25:42 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.304 11:25:42 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:00.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.304 --rc genhtml_branch_coverage=1 00:15:00.304 --rc genhtml_function_coverage=1 00:15:00.304 --rc genhtml_legend=1 00:15:00.304 --rc geninfo_all_blocks=1 00:15:00.304 --rc geninfo_unexecuted_blocks=1 00:15:00.304 00:15:00.304 ' 00:15:00.304 11:25:42 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:00.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.304 --rc genhtml_branch_coverage=1 00:15:00.304 --rc genhtml_function_coverage=1 00:15:00.304 --rc genhtml_legend=1 00:15:00.304 --rc geninfo_all_blocks=1 00:15:00.304 --rc geninfo_unexecuted_blocks=1 00:15:00.304 00:15:00.304 ' 00:15:00.304 11:25:42 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:15:00.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.304 --rc genhtml_branch_coverage=1 00:15:00.304 --rc genhtml_function_coverage=1 00:15:00.304 --rc genhtml_legend=1 00:15:00.304 --rc geninfo_all_blocks=1 00:15:00.304 --rc geninfo_unexecuted_blocks=1 00:15:00.304 00:15:00.304 ' 00:15:00.304 11:25:42 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:00.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.304 --rc genhtml_branch_coverage=1 00:15:00.304 --rc genhtml_function_coverage=1 00:15:00.304 --rc genhtml_legend=1 00:15:00.304 --rc geninfo_all_blocks=1 00:15:00.304 --rc geninfo_unexecuted_blocks=1 00:15:00.304 00:15:00.304 ' 00:15:00.304 11:25:42 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:00.304 11:25:42 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.563 11:25:42 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.563 11:25:42 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.563 11:25:42 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.563 11:25:42 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.563 11:25:42 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.563 11:25:42 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.563 11:25:42 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.563 11:25:42 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:15:00.563 11:25:42 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
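The lcov probe at the top of this test (`lt 1.15 2` via cmp_versions) decides which coverage flags get exported: the two version strings are split on '.', '-' and ':' and compared component by component. A numeric-only sketch of that comparison (the real scripts/common.sh additionally normalizes components through its decimal() helper, visible in the trace above):

    # Succeed when version $1 sorts before version $2, component by component.
    version_lt() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.0, use the old --rc option names"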
00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:00.563 11:25:42 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:15:00.563 11:25:42 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.563 11:25:42 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:01.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:01.130 Waiting for block devices as requested 00:15:01.388 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:01.388 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:01.646 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:01.646 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:06.925 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:06.925 11:25:48 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:06.925 11:25:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:06.925 11:25:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:06.925 11:25:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:06.925 11:25:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
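The nvme_fdp run starts from a clean slate: after setup.sh reset rebinds the four QEMU controllers to the kernel nvme driver, scan_nvme_ctrls walks /sys/class/nvme/nvme*, resolves each controller's PCI address, filters it through the PCI_ALLOWED/PCI_BLOCKED lists (both empty here, so pci_can_use returns 0), and repeats the id-ctrl parse, this time for nvme0 at 0000:00:11.0. A rough sketch of the discovery step (the sysfs address lookup via the device symlink is an assumption; the trace only shows the resulting address):

    # Enumerate NVMe controllers, keeping those that pass the PCI allow/block filters.
    declare -A bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0 (assumed lookup)
        if [[ -n ${PCI_ALLOWED:-} ]] && [[ " $PCI_ALLOWED " != *" $pci "* ]]; then
            continue                                      # not on the allow list
        fi
        [[ " ${PCI_BLOCKED:-} " == *" $pci "* ]] && continue
        bdfs[${ctrl##*/}]=$pci
    done
    for dev in "${!bdfs[@]}"; do echo "$dev -> ${bdfs[$dev]}"; done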
00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:06.925 11:25:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:06.925 11:25:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:06.925 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:06.926 11:25:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:06.926 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:06.926 11:25:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:06.927 11:25:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:06.927 
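The trace above is one pass of the nvme_get helper: it runs `nvme id-ctrl`, splits each `field : value` output line on the colon, and stores every non-empty value into a bash associative array (here nvme0). A minimal sketch of that pattern, assuming id-ctrl's `field : value` output format — the real nvme/functions.sh helper evals through a dynamic array name passed as $ref rather than hard-coding nvme0:

declare -gA nvme0=()
while IFS=: read -r reg val; do               # mirrors the IFS=: / read -r reg val pairs traced above
  [[ -n ${val// /} ]] || continue             # the [[ -n ... ]] guards above: skip empty values
  reg=${reg//[[:space:]]/}                    # 'vwc       ' -> 'vwc', 'ps    0' -> 'ps0'
  nvme0[$reg]=${val# }                        # the trace does this via: eval 'nvme0[vwc]="0x7"'
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

Since read assigns the whole remainder after the first colon to val, multi-colon values such as the ps0 power-state string land in the array intact, exactly as seen in the trace.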
11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.927 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:06.928 11:25:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:06.928 11:25:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:06.928 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:06.929 11:25:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:06.929 11:25:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:06.929 11:25:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:06.929 11:25:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # 
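Worth decoding from the nvme0n1 snapshot just captured: flbas selects the in-use entry of the lbaf table (low nibble is the format index), and lbads is log2 of the LBA data size. A small sketch using the exact values recorded above (variable names are illustrative only):

flbas=0x4                                    # nvme0n1[flbas] above
lbaf='ms:0 lbads:12 rp:0 (in use)'           # nvme0n1[lbaf4] above; index 4 = flbas & 0xf
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}    # -> 12
echo "block size: $(( 1 << lbads )) bytes"   # -> 4096
nsze=0x140000                                # nvme0n1[nsze] above
echo "namespace: $(( nsze * (1 << lbads) )) bytes"   # 1310720 * 4096 = 5 GiB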
IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.929 11:25:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:06.929 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 
11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- 
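One unit worth flagging in the nvme1 fields above: wctemp and cctemp are composite-temperature thresholds reported in kelvin, so the QEMU controller's 343/373 are the usual 70 °C warning / 100 °C critical pair:

wctemp=343; cctemp=373                       # nvme1[wctemp] / nvme1[cctemp] above
echo "warn $(( wctemp - 273 ))C, crit $(( cctemp - 273 ))C"   # warn 70C, crit 100C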
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.930 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 
11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:06.931 11:25:48 nvme_fdp -- 
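The sqes/cqes bytes just read are packed powers of two: the low nibble is the required queue-entry size, the high nibble the maximum, each as 2^n bytes. Decoding the values captured above (a sketch, not part of the test script):

sqes=0x66; cqes=0x44                         # nvme1[sqes] / nvme1[cqes] above
echo "SQE $(( 1 << (sqes & 0xf) ))-$(( 1 << (sqes >> 4) )) bytes"   # 64-64
echo "CQE $(( 1 << (cqes & 0xf) ))-$(( 1 << (cqes >> 4) )) bytes"   # 16-16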
nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.931 11:25:48 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.931 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:06.932 11:25:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:06.932 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:06.933 11:25:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:06.933 11:25:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:06.933 11:25:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:06.933 11:25:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:06.933 
11:25:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.933 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:06.934 11:25:48 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.934 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:06.935 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:06.936 11:25:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:06.936 11:25:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=:
00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:15:06.936 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:15:07.199 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
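The block above is nvme_get populating the bash associative array nvme2n1 from `/usr/local/src/nvme-cli/nvme id-ns` output: each report line is split on its first colon into a register name and a value, empty values are skipped, and the pair is stored via eval. A minimal standalone sketch of the same pattern (hypothetical function name parse_id_ns; a bash nameref stands in for the harness's eval):

#!/usr/bin/env bash
# Sketch: parse `nvme id-ns <dev>` key/value output into a named assoc array.
parse_id_ns() {                          # usage: parse_id_ns <array-name> <device>
    local -n ref=$1                      # nameref in place of the harness's eval
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # "lbaf  4 " -> "lbaf4", "nsze " -> "nsze"
        [[ -n $reg && -n $val ]] && ref[$reg]=${val# }
    done < <(nvme id-ns "$2")            # assumes nvme-cli is on PATH
}

declare -A nvme2n1=()
parse_id_ns nvme2n1 /dev/nvme2n1
echo "nsze=${nvme2n1[nsze]} flbas=${nvme2n1[flbas]}"

Note that with two read variables the value keeps any internal colons, which is why the lbafN entries ("ms:0 lbads:12 rp:0 ...") survive the split intact.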
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:15:07.200 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
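Each namespace above reports eight LBA formats (lbaf0..lbaf7) and flbas=0x4 selects the one in use: the low four bits of flbas index the table (sufficient here, since nlbaf=7), so lbaf4 applies, i.e. lbads:12 giving 2^12 = 4096-byte data blocks with ms:0 (no per-block metadata). A small sketch of that decode, assuming the "ms:X lbads:Y rp:Z" strings captured in the trace:

#!/usr/bin/env bash
# Derive the active block size from flbas and the lbafN table of a namespace.
flbas=0x4
declare -A ns=( [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )      # value as traced above

idx=$(( flbas & 0xf ))                      # FLBAS bits 3:0 = current format index
fmt=${ns[lbaf$idx]}
lbads=${fmt##*lbads:}                       # cut everything up to "lbads:"
lbads=${lbads%% *}                          # keep the digits that follow
echo "lbaf$idx in use: $(( 1 << lbads ))-byte data blocks"   # prints 4096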
ns in "$ctrl/${ctrl##*/}n"* 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:07.201 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:07.202 
11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:07.202 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:07.203 11:25:48 nvme_fdp -- 
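The @60-@63 entries are the harness's per-controller bookkeeping: ctrls maps the device name to its parsed id-ctrl array, nvmes to its namespace table, bdfs to its PCI address, and ordered_ctrls preserves scan order. Outside the harness, the name-to-BDF map recorded in bdfs can be recovered straight from sysfs; a standalone sketch (not the harness's own code):

#!/usr/bin/env bash
# List every NVMe controller with the PCI address its sysfs node points at.
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                         # glob may match nothing
    name=${ctrl##*/}                                   # nvme2, nvme3, ...
    bdf=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:12.0
    echo "$name -> $bdf"
done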
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:15:07.203 11:25:48 nvme_fdp -- scripts/common.sh@18 -- # local i
00:15:07.203 11:25:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:15:07.203 11:25:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:15:07.203 11:25:48 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:15:07.203 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.204 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 
11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.205 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.206 11:25:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:07.206 11:25:48 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:15:07.206 11:25:48 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:15:07.206 11:25:48 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:15:07.206 11:25:48 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:08.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:08.707 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.707 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.707 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.965 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.965 11:25:50 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:08.965 11:25:50 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:08.965 11:25:50 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.965 11:25:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:08.965 ************************************ 00:15:08.965 START TEST nvme_flexible_data_placement 00:15:08.965 ************************************ 00:15:08.965 11:25:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:09.267 Initializing NVMe Controllers 00:15:09.267 Attaching to 0000:00:13.0 00:15:09.267 Controller supports FDP Attached to 0000:00:13.0 00:15:09.267 Namespace ID: 1 Endurance Group ID: 1 00:15:09.267 Initialization complete. 00:15:09.267 00:15:09.267 ================================== 00:15:09.267 == FDP tests for Namespace: #01 == 00:15:09.267 ================================== 00:15:09.267 00:15:09.267 Get Feature: FDP: 00:15:09.267 ================= 00:15:09.267 Enabled: Yes 00:15:09.267 FDP configuration Index: 0 00:15:09.267 00:15:09.267 FDP configurations log page 00:15:09.267 =========================== 00:15:09.267 Number of FDP configurations: 1 00:15:09.267 Version: 0 00:15:09.267 Size: 112 00:15:09.267 FDP Configuration Descriptor: 0 00:15:09.267 Descriptor Size: 96 00:15:09.267 Reclaim Group Identifier format: 2 00:15:09.267 FDP Volatile Write Cache: Not Present 00:15:09.267 FDP Configuration: Valid 00:15:09.267 Vendor Specific Size: 0 00:15:09.267 Number of Reclaim Groups: 2 00:15:09.267 Number of Reclaim Unit Handles: 8 00:15:09.267 Max Placement Identifiers: 128 00:15:09.267 Number of Namespaces Supported: 256 00:15:09.267 Reclaim unit Nominal Size: 6000000 bytes 00:15:09.267 Estimated Reclaim Unit Time Limit: Not Reported 00:15:09.267 RUH Desc #000: RUH Type: Initially Isolated 00:15:09.267 RUH Desc #001: RUH Type: Initially Isolated 00:15:09.267 RUH Desc #002: RUH Type: Initially Isolated 00:15:09.267 RUH Desc #003: RUH Type: Initially Isolated 00:15:09.267 RUH Desc #004: RUH Type: Initially Isolated 00:15:09.267 RUH Desc #005: RUH Type: Initially Isolated 00:15:09.267 RUH Desc #006: RUH Type: Initially Isolated 00:15:09.267 RUH Desc #007: RUH Type: Initially Isolated 00:15:09.267 00:15:09.267 FDP reclaim unit handle usage log page 00:15:09.267 ====================================== 00:15:09.267 Number of Reclaim Unit Handles: 8 00:15:09.267 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:09.267 RUH Usage Desc #001: RUH Attributes: Unused 00:15:09.267 RUH Usage Desc #002: RUH Attributes: Unused 00:15:09.267 RUH Usage Desc #003: RUH Attributes: Unused 00:15:09.267 RUH Usage Desc #004: RUH Attributes: Unused 00:15:09.267 RUH Usage Desc #005: RUH Attributes: Unused 00:15:09.267 RUH Usage Desc #006: RUH Attributes: Unused 00:15:09.267 RUH Usage Desc #007: RUH Attributes: Unused 00:15:09.267 00:15:09.267 FDP statistics log page 00:15:09.267 ======================= 00:15:09.267 Host bytes with metadata written: 902713344 00:15:09.267 Media bytes with metadata written: 902873088 00:15:09.267 Media bytes erased: 0 00:15:09.267 00:15:09.267 FDP Reclaim unit handle status 00:15:09.267 ============================== 00:15:09.267 Number of RUHS descriptors: 2 00:15:09.267 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000031b 00:15:09.267 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:15:09.267 00:15:09.267 FDP write on placement id: 0 success 00:15:09.267 00:15:09.267 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:15:09.267 00:15:09.267 IO mgmt send: RUH update for Placement ID: #0 Success 00:15:09.267 00:15:09.267 Get Feature: FDP Events for Placement handle: #0 00:15:09.267 ======================== 00:15:09.267 Number of FDP Events: 6 00:15:09.267 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:15:09.267 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:15:09.267 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:15:09.267 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:15:09.267 FDP Event: #4 Type: Media Reallocated Enabled: No 00:15:09.267 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:15:09.267 00:15:09.267 FDP events log page 00:15:09.267 =================== 00:15:09.267 Number of FDP events: 1 00:15:09.267 FDP Event #0: 00:15:09.267 Event Type: RU Not Written to Capacity 00:15:09.267 Placement Identifier: Valid 00:15:09.267 NSID: Valid 00:15:09.267 Location: Valid 00:15:09.267 Placement Identifier: 0 00:15:09.267 Event Timestamp: 8 00:15:09.267 Namespace Identifier: 1 00:15:09.267 Reclaim Group Identifier: 0 00:15:09.267 Reclaim Unit Handle Identifier: 0 00:15:09.267 00:15:09.267 FDP test passed 00:15:09.267 00:15:09.267 real 0m0.292s 00:15:09.267 user 0m0.090s 00:15:09.267 sys 0m0.100s 00:15:09.267 ************************************ 00:15:09.267 END TEST nvme_flexible_data_placement 00:15:09.267 ************************************ 00:15:09.267 11:25:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:09.267 11:25:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:15:09.267 ************************************ 00:15:09.267 END TEST nvme_fdp 00:15:09.267 ************************************ 00:15:09.268 00:15:09.268 real 0m9.132s 00:15:09.268 user 0m1.592s 00:15:09.268 sys 0m2.531s 00:15:09.268 11:25:50 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:09.268 11:25:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 11:25:50 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:15:09.268 11:25:50 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:09.268 11:25:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:09.268 11:25:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:09.268 11:25:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.526 ************************************ 00:15:09.526 START TEST nvme_rpc 00:15:09.526 ************************************ 00:15:09.526 11:25:50 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:09.526 * Looking for test storage... 
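For reference: the ctrl_has_fdp scan earlier in this run settled on nvme3 because its CTRATT value, 0x88010, has bit 19 set, while the other controllers report 0x8000. A minimal standalone sketch of that check, assuming a CTRATT value already extracted from identify data:

    ctratt=0x88010                      # value echoed for nvme3 above; the others show 0x8000
    if (( ctratt & (1 << 19) )); then   # bit 19 of CTRATT advertises FDP support
        echo "controller supports flexible data placement"
    fi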
00:15:09.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.526 11:25:51 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:09.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.526 --rc genhtml_branch_coverage=1 00:15:09.526 --rc genhtml_function_coverage=1 00:15:09.526 --rc genhtml_legend=1 00:15:09.526 --rc geninfo_all_blocks=1 00:15:09.526 --rc geninfo_unexecuted_blocks=1 00:15:09.526 00:15:09.526 ' 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:09.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.526 --rc genhtml_branch_coverage=1 00:15:09.526 --rc genhtml_function_coverage=1 00:15:09.526 --rc genhtml_legend=1 00:15:09.526 --rc geninfo_all_blocks=1 00:15:09.526 --rc geninfo_unexecuted_blocks=1 00:15:09.526 00:15:09.526 ' 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:15:09.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.526 --rc genhtml_branch_coverage=1 00:15:09.526 --rc genhtml_function_coverage=1 00:15:09.526 --rc genhtml_legend=1 00:15:09.526 --rc geninfo_all_blocks=1 00:15:09.526 --rc geninfo_unexecuted_blocks=1 00:15:09.526 00:15:09.526 ' 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:09.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.526 --rc genhtml_branch_coverage=1 00:15:09.526 --rc genhtml_function_coverage=1 00:15:09.526 --rc genhtml_legend=1 00:15:09.526 --rc geninfo_all_blocks=1 00:15:09.526 --rc geninfo_unexecuted_blocks=1 00:15:09.526 00:15:09.526 ' 00:15:09.526 11:25:51 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.526 11:25:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:15:09.526 11:25:51 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:15:09.784 11:25:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:15:09.784 11:25:51 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67791 00:15:09.784 11:25:51 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:09.784 11:25:51 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:15:09.784 11:25:51 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67791 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 67791 ']' 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.784 11:25:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.784 [2024-10-07 11:25:51.458510] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
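The get_first_nvme_bdf trace above reduces to the following sketch (paths as used in this run; the error handling is condensed):

    # enumerate NVMe controller BDFs from gen_nvme.sh's JSON config and take the first one
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    echo "${bdfs[0]}"    # resolves to 0000:00:10.0 on this machine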
00:15:09.784 [2024-10-07 11:25:51.458651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67791 ] 00:15:10.043 [2024-10-07 11:25:51.630647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:10.301 [2024-10-07 11:25:51.856144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.301 [2024-10-07 11:25:51.856174] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.234 11:25:52 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:11.234 11:25:52 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:11.234 11:25:52 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:15:11.598 Nvme0n1 00:15:11.598 11:25:53 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:15:11.598 11:25:53 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:15:11.598 request: 00:15:11.598 { 00:15:11.598 "bdev_name": "Nvme0n1", 00:15:11.598 "filename": "non_existing_file", 00:15:11.598 "method": "bdev_nvme_apply_firmware", 00:15:11.598 "req_id": 1 00:15:11.598 } 00:15:11.598 Got JSON-RPC error response 00:15:11.598 response: 00:15:11.598 { 00:15:11.598 "code": -32603, 00:15:11.598 "message": "open file failed." 00:15:11.598 } 00:15:11.598 11:25:53 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:15:11.598 11:25:53 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:15:11.598 11:25:53 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:11.864 11:25:53 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:11.864 11:25:53 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67791 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 67791 ']' 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 67791 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67791 00:15:11.864 killing process with pid 67791 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67791' 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@969 -- # kill 67791 00:15:11.864 11:25:53 nvme_rpc -- common/autotest_common.sh@974 -- # wait 67791 00:15:14.411 ************************************ 00:15:14.411 END TEST nvme_rpc 00:15:14.411 ************************************ 00:15:14.411 00:15:14.411 real 0m4.987s 00:15:14.411 user 0m8.825s 00:15:14.411 sys 0m0.830s 00:15:14.411 11:25:55 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.411 11:25:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.411 11:25:56 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:14.411 11:25:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:15:14.411 11:25:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.411 11:25:56 -- common/autotest_common.sh@10 -- # set +x 00:15:14.411 ************************************ 00:15:14.411 START TEST nvme_rpc_timeouts 00:15:14.411 ************************************ 00:15:14.411 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:14.669 * Looking for test storage... 00:15:14.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.669 11:25:56 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.669 --rc genhtml_branch_coverage=1 00:15:14.669 --rc genhtml_function_coverage=1 00:15:14.669 --rc genhtml_legend=1 00:15:14.669 --rc geninfo_all_blocks=1 00:15:14.669 --rc geninfo_unexecuted_blocks=1 00:15:14.669 00:15:14.669 ' 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.669 --rc genhtml_branch_coverage=1 00:15:14.669 --rc genhtml_function_coverage=1 00:15:14.669 --rc genhtml_legend=1 00:15:14.669 --rc geninfo_all_blocks=1 00:15:14.669 --rc geninfo_unexecuted_blocks=1 00:15:14.669 00:15:14.669 ' 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.669 --rc genhtml_branch_coverage=1 00:15:14.669 --rc genhtml_function_coverage=1 00:15:14.669 --rc genhtml_legend=1 00:15:14.669 --rc geninfo_all_blocks=1 00:15:14.669 --rc geninfo_unexecuted_blocks=1 00:15:14.669 00:15:14.669 ' 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.669 --rc genhtml_branch_coverage=1 00:15:14.669 --rc genhtml_function_coverage=1 00:15:14.669 --rc genhtml_legend=1 00:15:14.669 --rc geninfo_all_blocks=1 00:15:14.669 --rc geninfo_unexecuted_blocks=1 00:15:14.669 00:15:14.669 ' 00:15:14.669 11:25:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.669 11:25:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67869 00:15:14.669 11:25:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67869 00:15:14.669 11:25:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67907 00:15:14.669 11:25:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
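The test launched below boils down to a capture-change-capture flow (the rpc.py path and tmpfile names are the ones from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_67869     # snapshot the default bdev_nvme options
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_67869    # snapshot again after the change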
00:15:14.669 11:25:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:14.669 11:25:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67907 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 67907 ']' 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.669 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.670 11:25:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:14.938 [2024-10-07 11:25:56.382643] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:15:14.938 [2024-10-07 11:25:56.382975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67907 ] 00:15:14.938 [2024-10-07 11:25:56.560514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:15.209 [2024-10-07 11:25:56.801627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.209 [2024-10-07 11:25:56.801662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.165 Checking default timeout settings: 00:15:16.165 11:25:57 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.165 11:25:57 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:15:16.165 11:25:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:15:16.165 11:25:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:16.437 Making settings changes with rpc: 00:15:16.437 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:15:16.437 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:15:16.699 Check default vs. modified settings: 00:15:16.699 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:15:16.699 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:16.958 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:15:16.958 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:16.958 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67869 00:15:16.958 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:16.958 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67869 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:17.218 Setting action_on_timeout is changed as expected. 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67869 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67869 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:17.218 Setting timeout_us is changed as expected. 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
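The default-vs-modified comparison traced above and continued below amounts to this loop over the three settings, using the same grep/awk/sed pipeline as the trace:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67869 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67869 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done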
00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67869 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67869 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:17.218 Setting timeout_admin_us is changed as expected. 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67869 /tmp/settings_modified_67869 00:15:17.218 11:25:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67907 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 67907 ']' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 67907 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67907 00:15:17.218 killing process with pid 67907 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67907' 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 67907 00:15:17.218 11:25:58 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 67907 00:15:19.753 RPC TIMEOUT SETTING TEST PASSED. 00:15:19.753 11:26:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:15:19.753 00:15:19.753 real 0m5.379s 00:15:19.753 user 0m9.954s 00:15:19.753 sys 0m0.823s 00:15:19.753 ************************************ 00:15:19.753 END TEST nvme_rpc_timeouts 00:15:19.753 ************************************ 00:15:19.753 11:26:01 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.753 11:26:01 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:20.012 11:26:01 -- spdk/autotest.sh@239 -- # uname -s 00:15:20.012 11:26:01 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:15:20.012 11:26:01 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:20.012 11:26:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:20.012 11:26:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.012 11:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:20.012 ************************************ 00:15:20.012 START TEST sw_hotplug 00:15:20.012 ************************************ 00:15:20.012 11:26:01 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:20.012 * Looking for test storage... 00:15:20.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:20.012 11:26:01 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:20.012 11:26:01 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:15:20.012 11:26:01 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:20.012 11:26:01 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:15:20.012 11:26:01 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:15:20.270 11:26:01 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.270 11:26:01 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:15:20.270 11:26:01 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.270 11:26:01 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.270 11:26:01 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.270 11:26:01 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:15:20.270 11:26:01 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.270 11:26:01 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:20.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.270 --rc genhtml_branch_coverage=1 00:15:20.270 --rc genhtml_function_coverage=1 00:15:20.270 --rc genhtml_legend=1 00:15:20.270 --rc geninfo_all_blocks=1 00:15:20.270 --rc geninfo_unexecuted_blocks=1 00:15:20.270 00:15:20.270 ' 00:15:20.270 11:26:01 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:20.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.270 --rc genhtml_branch_coverage=1 00:15:20.270 --rc genhtml_function_coverage=1 00:15:20.270 --rc genhtml_legend=1 00:15:20.270 --rc geninfo_all_blocks=1 00:15:20.270 --rc geninfo_unexecuted_blocks=1 00:15:20.270 00:15:20.270 ' 00:15:20.270 11:26:01 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:20.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.270 --rc genhtml_branch_coverage=1 00:15:20.270 --rc genhtml_function_coverage=1 00:15:20.270 --rc genhtml_legend=1 00:15:20.270 --rc geninfo_all_blocks=1 00:15:20.270 --rc geninfo_unexecuted_blocks=1 00:15:20.270 00:15:20.270 ' 00:15:20.270 11:26:01 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:20.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.271 --rc genhtml_branch_coverage=1 00:15:20.271 --rc genhtml_function_coverage=1 00:15:20.271 --rc genhtml_legend=1 00:15:20.271 --rc geninfo_all_blocks=1 00:15:20.271 --rc geninfo_unexecuted_blocks=1 00:15:20.271 00:15:20.271 ' 00:15:20.271 11:26:01 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:20.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:20.839 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:20.839 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:20.839 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:20.839 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.113 11:26:02 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:15:21.113 11:26:02 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:15:21.113 11:26:02 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
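The lt 1.15 2 call traced above (used to decide which lcov coverage options apply) goes through cmp_versions in scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field. A sketch covering the path exercised here (the real helper validates fields with its decimal guard and supports more operators):

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # A missing field compares as 0, so "2" is treated like "2.0".
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && { [[ $op == '>' ]]; return; }
        ((d1 < d2)) && { [[ $op == '<' ]]; return; }
    done
    # All fields equal: a strict comparison fails.
    [[ $op == '<=' || $op == '>=' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov older than 2.x: use the legacy branch/function coverage flags"

Here ver1=(1 15) and ver2=(2); the first field already decides it (1 < 2), matching the return 0 in the trace.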
00:15:21.113 11:26:02 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@233 -- # local class 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:21.113 11:26:02 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:15:21.113 11:26:02 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:21.113 11:26:02 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:15:21.113 11:26:02 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:15:21.113 11:26:02 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:21.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:21.957 Waiting for block devices as requested 00:15:21.957 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:22.216 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:22.216 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:22.216 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:27.489 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:27.489 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:15:27.489 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:28.057 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:15:28.316 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:28.316 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:15:28.575 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:15:28.833 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:28.833 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:15:29.093 11:26:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68797 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:15:29.093 11:26:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:15:29.093 11:26:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:15:29.093 11:26:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:15:29.093 11:26:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:15:29.093 11:26:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:29.093 11:26:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:29.352 Initializing NVMe Controllers 00:15:29.352 Attaching to 0000:00:10.0 00:15:29.352 Attaching to 0000:00:11.0 00:15:29.352 Attached to 0000:00:10.0 00:15:29.352 Attached to 0000:00:11.0 00:15:29.352 Initialization complete. Starting I/O... 
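Stepping back to the nvme_in_userspace walk traced before the reset: NVMe controllers are found purely by PCI class code. printf %02x renders class 01, subclass 08, prog-if 02 as the hex fields lspci prints, and the second column of lspci -mm -n -D (the quoted class+subclass) is matched against "0108". A sketch of that enumeration under the same assumptions as the trace (Linux, lspci present; the PCI_ALLOWED / pci_can_use filtering is elided):

iter_pci_class_code() {
    # $3 (prog-if) is computed by the real helper but the match shown
    # in the trace uses only the 4-digit class+subclass code.
    local class subclass
    class=$(printf %02x "$1")
    subclass=$(printf %02x "$2")
    # Column 2 of lspci -mm -n -D is the quoted class code, e.g. "0108";
    # cc keeps the quotes so the regex match lines up, tr strips leftovers.
    lspci -mm -n -D | awk -v cc="\"$class$subclass\"" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
}

nvmes=($(iter_pci_class_code 01 08 02))
bdfs=()
for bdf in "${nvmes[@]}"; do
    # The FreeBSD branch in the trace is a no-op on this host; every
    # surviving BDF is kept (0000:00:10.0 .. 0000:00:13.0 in this run).
    [[ $(uname -s) == FreeBSD ]] && continue
    bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"

The test then keeps only the first nvme_count=2 of the four addresses ("${nvmes[@]::nvme_count}"), which is why only 0000:00:10.0 and 0000:00:11.0 take part in the hotplug events.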
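The debug_remove_attach_helper wrapper traced just above times the whole exercise with bash's time keyword: TIMEFORMAT=%2R makes time print only the real (wall-clock) seconds with two decimals, which is exactly the 43.23 that surfaces later in this log. A minimal sketch of the idea (the real timing_cmd in autotest_common.sh also juggles file descriptors so the timed command's own output still reaches the log):

timing_cmd() {
    local cmd_es=0
    local time=0 TIMEFORMAT=%2R
    # `time { ...; }` writes the elapsed seconds to stderr in the
    # TIMEFORMAT shape; the redirection captures just that number.
    time=$({ time "$@" > /dev/null; } 2>&1) || cmd_es=$?
    echo "$time"
    return "$cmd_es"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 false)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$helper_time" 2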
00:15:29.352 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:15:29.352 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:15:29.352 00:15:30.727 QEMU NVMe Ctrl (12340 ): 1236 I/Os completed (+1236) 00:15:30.727 QEMU NVMe Ctrl (12341 ): 1256 I/Os completed (+1256) 00:15:30.727 00:15:31.365 QEMU NVMe Ctrl (12340 ): 3132 I/Os completed (+1896) 00:15:31.365 QEMU NVMe Ctrl (12341 ): 3152 I/Os completed (+1896) 00:15:31.365 00:15:32.302 QEMU NVMe Ctrl (12340 ): 4844 I/Os completed (+1712) 00:15:32.302 QEMU NVMe Ctrl (12341 ): 4867 I/Os completed (+1715) 00:15:32.302 00:15:33.676 QEMU NVMe Ctrl (12340 ): 6300 I/Os completed (+1456) 00:15:33.676 QEMU NVMe Ctrl (12341 ): 6348 I/Os completed (+1481) 00:15:33.676 00:15:34.612 QEMU NVMe Ctrl (12340 ): 7857 I/Os completed (+1557) 00:15:34.612 QEMU NVMe Ctrl (12341 ): 7930 I/Os completed (+1582) 00:15:34.612 00:15:35.180 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:35.180 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:35.180 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:35.180 [2024-10-07 11:26:16.770321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:35.180 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:35.180 [2024-10-07 11:26:16.776442] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.180 [2024-10-07 11:26:16.776642] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.180 [2024-10-07 11:26:16.776715] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.180 [2024-10-07 11:26:16.776817] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.180 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:35.180 [2024-10-07 11:26:16.784103] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.180 [2024-10-07 11:26:16.784219] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.180 [2024-10-07 11:26:16.784259] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 [2024-10-07 11:26:16.784300] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:35.181 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:35.181 [2024-10-07 11:26:16.806350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
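Xtrace never shows redirections, so the bare echo 1 under sw_hotplug.sh@40 above only makes sense with its sysfs target restored. A plausible reconstruction of one hot-remove event, assuming the conventional PCI sysfs nodes (the remove/rescan paths are inferred, not visible in the log):

for dev in "${nvmes[@]}"; do
    # Ask the kernel to detach the function and delete its device node;
    # the aborted-command errors above are the driver tearing down I/O.
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done
# Later (sw_hotplug.sh@56) a single write brings everything back:
echo 1 > /sys/bus/pci/rescan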
00:15:35.181 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:35.181 [2024-10-07 11:26:16.808276] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 [2024-10-07 11:26:16.808341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 [2024-10-07 11:26:16.808370] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 [2024-10-07 11:26:16.808394] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:35.181 [2024-10-07 11:26:16.811221] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 [2024-10-07 11:26:16.811271] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 [2024-10-07 11:26:16.811294] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 [2024-10-07 11:26:16.811312] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.181 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:35.181 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:35.181 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:15:35.181 EAL: Scan for (pci) bus failed. 00:15:35.439 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:35.439 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:35.439 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:35.439 00:15:35.439 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:35.439 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:35.439 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:35.439 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:35.439 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:35.439 Attaching to 0000:00:10.0 00:15:35.439 Attached to 0000:00:10.0 00:15:35.698 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:35.698 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:35.698 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:35.698 Attaching to 0000:00:11.0 00:15:35.698 Attached to 0000:00:11.0 00:15:36.636 QEMU NVMe Ctrl (12340 ): 1385 I/Os completed (+1385) 00:15:36.636 QEMU NVMe Ctrl (12341 ): 1224 I/Os completed (+1224) 00:15:36.636 00:15:37.569 QEMU NVMe Ctrl (12340 ): 2996 I/Os completed (+1611) 00:15:37.569 QEMU NVMe Ctrl (12341 ): 2860 I/Os completed (+1636) 00:15:37.569 00:15:38.503 QEMU NVMe Ctrl (12340 ): 4740 I/Os completed (+1744) 00:15:38.503 QEMU NVMe Ctrl (12341 ): 4622 I/Os completed (+1762) 00:15:38.503 00:15:39.449 QEMU NVMe Ctrl (12340 ): 6334 I/Os completed (+1594) 00:15:39.449 QEMU NVMe Ctrl (12341 ): 6273 I/Os completed (+1651) 00:15:39.449 00:15:40.423 QEMU NVMe Ctrl (12340 ): 7763 I/Os completed (+1429) 00:15:40.423 QEMU NVMe Ctrl (12341 ): 7818 I/Os completed (+1545) 00:15:40.423 00:15:41.360 QEMU NVMe Ctrl (12340 ): 9335 I/Os completed (+1572) 00:15:41.360 QEMU NVMe Ctrl (12341 ): 9399 I/Os completed (+1581) 00:15:41.360 00:15:42.296 QEMU NVMe Ctrl (12340 ): 11219 I/Os completed (+1884) 00:15:42.296 QEMU NVMe Ctrl (12341 ): 11291 I/Os completed (+1892) 
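After the rescan, the devices come back unbound, and the @59-@62 echoes above rebind them to uio_pci_generic. Again the redirect targets are stripped by xtrace; a plausible mapping of the four writes onto the standard driver_override flow (treat the exact sysfs paths as assumptions):

for dev in "${nvmes[@]}"; do
    # Route the next probe of this function to uio_pci_generic (@59) ...
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    # ... detach it from whatever claimed it on rescan (@60) ...
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2> /dev/null || true
    # ... re-probe so the override takes effect (@61) ...
    echo "$dev" > /sys/bus/pci/drivers_probe
    # ... and clear the override again (@62).
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done

The "Attaching to 0000:00:10.0 / Attached to" lines that follow are the hotplug example reacting to the devices reappearing.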
00:15:42.296 00:15:43.669 QEMU NVMe Ctrl (12340 ): 13112 I/Os completed (+1893) 00:15:43.669 QEMU NVMe Ctrl (12341 ): 13180 I/Os completed (+1889) 00:15:43.669 00:15:44.605 QEMU NVMe Ctrl (12340 ): 15260 I/Os completed (+2148) 00:15:44.605 QEMU NVMe Ctrl (12341 ): 15333 I/Os completed (+2153) 00:15:44.605 00:15:45.539 QEMU NVMe Ctrl (12340 ): 17412 I/Os completed (+2152) 00:15:45.539 QEMU NVMe Ctrl (12341 ): 17485 I/Os completed (+2152) 00:15:45.539 00:15:46.475 QEMU NVMe Ctrl (12340 ): 19532 I/Os completed (+2120) 00:15:46.475 QEMU NVMe Ctrl (12341 ): 19605 I/Os completed (+2120) 00:15:46.475 00:15:47.415 QEMU NVMe Ctrl (12340 ): 21580 I/Os completed (+2048) 00:15:47.415 QEMU NVMe Ctrl (12341 ): 21653 I/Os completed (+2048) 00:15:47.415 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:47.674 [2024-10-07 11:26:29.212280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:47.674 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:47.674 [2024-10-07 11:26:29.216828] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.216901] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.216927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.216951] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:47.674 [2024-10-07 11:26:29.220003] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.220068] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.220091] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.220114] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:47.674 [2024-10-07 11:26:29.253500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:47.674 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:47.674 [2024-10-07 11:26:29.255262] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.255317] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.255349] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.255371] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:47.674 [2024-10-07 11:26:29.258164] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.258214] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.258236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 [2024-10-07 11:26:29.258257] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:47.674 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:15:47.674 EAL: Scan for (pci) bus failed. 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:47.674 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:47.933 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:47.933 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:47.933 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:47.933 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:47.933 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:47.933 Attaching to 0000:00:10.0 00:15:47.933 Attached to 0000:00:10.0 00:15:47.933 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:47.933 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:47.933 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:47.933 Attaching to 0000:00:11.0 00:15:47.933 Attached to 0000:00:11.0 00:15:48.498 QEMU NVMe Ctrl (12340 ): 940 I/Os completed (+940) 00:15:48.498 QEMU NVMe Ctrl (12341 ): 711 I/Os completed (+711) 00:15:48.498 00:15:49.432 QEMU NVMe Ctrl (12340 ): 3020 I/Os completed (+2080) 00:15:49.432 QEMU NVMe Ctrl (12341 ): 2791 I/Os completed (+2080) 00:15:49.432 00:15:50.374 QEMU NVMe Ctrl (12340 ): 5084 I/Os completed (+2064) 00:15:50.374 QEMU NVMe Ctrl (12341 ): 4855 I/Os completed (+2064) 00:15:50.374 00:15:51.307 QEMU NVMe Ctrl (12340 ): 7160 I/Os completed (+2076) 00:15:51.307 QEMU NVMe Ctrl (12341 ): 6931 I/Os completed (+2076) 00:15:51.307 00:15:52.694 QEMU NVMe Ctrl (12340 ): 9212 I/Os completed (+2052) 00:15:52.694 QEMU NVMe Ctrl (12341 ): 8985 I/Os completed (+2054) 00:15:52.694 00:15:53.631 QEMU NVMe Ctrl (12340 ): 11308 I/Os completed (+2096) 00:15:53.631 QEMU NVMe Ctrl (12341 ): 11081 I/Os completed (+2096) 00:15:53.631 00:15:54.567 QEMU NVMe Ctrl (12340 ): 13472 I/Os completed (+2164) 00:15:54.567 QEMU NVMe Ctrl (12341 ): 13245 I/Os completed (+2164) 00:15:54.567 00:15:55.503 
QEMU NVMe Ctrl (12340 ): 15652 I/Os completed (+2180) 00:15:55.503 QEMU NVMe Ctrl (12341 ): 15425 I/Os completed (+2180) 00:15:55.503 00:15:56.439 QEMU NVMe Ctrl (12340 ): 17852 I/Os completed (+2200) 00:15:56.439 QEMU NVMe Ctrl (12341 ): 17627 I/Os completed (+2202) 00:15:56.439 00:15:57.375 QEMU NVMe Ctrl (12340 ): 20024 I/Os completed (+2172) 00:15:57.375 QEMU NVMe Ctrl (12341 ): 19799 I/Os completed (+2172) 00:15:57.375 00:15:58.311 QEMU NVMe Ctrl (12340 ): 22076 I/Os completed (+2052) 00:15:58.311 QEMU NVMe Ctrl (12341 ): 21858 I/Os completed (+2059) 00:15:58.311 00:15:59.687 QEMU NVMe Ctrl (12340 ): 24228 I/Os completed (+2152) 00:15:59.687 QEMU NVMe Ctrl (12341 ): 24010 I/Os completed (+2152) 00:15:59.687 00:15:59.965 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:59.965 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:59.965 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:59.965 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:59.965 [2024-10-07 11:26:41.609821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:59.965 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:59.965 [2024-10-07 11:26:41.611986] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.612062] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.612088] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.612112] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:59.965 [2024-10-07 11:26:41.615262] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.615320] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.615341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.615363] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:59.965 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:59.965 [2024-10-07 11:26:41.650856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:59.965 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:59.965 [2024-10-07 11:26:41.652550] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.652604] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.652631] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.652650] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:59.965 [2024-10-07 11:26:41.655265] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.655310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.655333] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 [2024-10-07 11:26:41.655350] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:59.965 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:59.965 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:00.249 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:00.249 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:00.249 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:00.249 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:00.249 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:00.249 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:00.249 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:00.249 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:00.249 Attaching to 0000:00:10.0 00:16:00.249 Attached to 0000:00:10.0 00:16:00.507 QEMU NVMe Ctrl (12340 ): 164 I/Os completed (+164) 00:16:00.507 00:16:00.507 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:00.507 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:00.507 11:26:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:00.507 Attaching to 0000:00:11.0 00:16:00.507 Attached to 0000:00:11.0 00:16:00.507 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:00.507 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:00.507 [2024-10-07 11:26:42.001030] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:16:12.746 11:26:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:12.746 11:26:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:12.746 11:26:53 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.23 00:16:12.746 11:26:53 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.23 00:16:12.746 11:26:53 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:16:12.746 11:26:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.23 00:16:12.746 11:26:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.23 2 00:16:12.746 remove_attach_helper took 43.23s to complete (handling 2 nvme drive(s)) 11:26:53 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:16:19.304 11:26:59 sw_hotplug -- nvme/sw_hotplug.sh@93 
-- # kill -0 68797 00:16:19.304 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68797) - No such process 00:16:19.304 11:27:00 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68797 00:16:19.304 11:27:00 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:16:19.304 11:27:00 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:16:19.304 11:27:00 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:16:19.304 11:27:00 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69337 00:16:19.304 11:27:00 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.304 11:27:00 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:16:19.304 11:27:00 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69337 00:16:19.304 11:27:00 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 69337 ']' 00:16:19.304 11:27:00 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.304 11:27:00 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.304 11:27:00 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.304 11:27:00 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.304 11:27:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:19.304 [2024-10-07 11:27:00.121798] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:16:19.304 [2024-10-07 11:27:00.121944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69337 ] 00:16:19.304 [2024-10-07 11:27:00.296018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.304 [2024-10-07 11:27:00.517195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:16:19.872 11:27:01 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local 
hotplug_events=3 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:19.872 11:27:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:26.499 11:27:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.499 11:27:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 [2024-10-07 11:27:07.496011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:26.499 [2024-10-07 11:27:07.498698] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:26.499 [2024-10-07 11:27:07.498765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.499 [2024-10-07 11:27:07.498785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.499 [2024-10-07 11:27:07.498814] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:26.499 [2024-10-07 11:27:07.498827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.499 [2024-10-07 11:27:07.498843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.499 [2024-10-07 11:27:07.498857] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:26.499 [2024-10-07 11:27:07.498872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.499 [2024-10-07 11:27:07.498885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.499 [2024-10-07 11:27:07.498907] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:26.499 [2024-10-07 11:27:07.498919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.499 [2024-10-07 11:27:07.498935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.499 11:27:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
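The second half of the test (tgt_run_hotplug) replays the same events against a running spdk_tgt with use_bdev=true, so "gone" is now judged through the RPC layer instead of sysfs. The bdev_bdfs helper traced above is just this pipeline (rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock):

rpc_cmd() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"
}

bdev_bdfs() {
    # Every NVMe-backed bdev reports the PCI address it sits on;
    # sort -u collapses the namespaces down to one line per controller.
    rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
}

Before the first event, bdev_nvme_set_hotplug -e (sw_hotplug.sh@115 above) tells the target to watch for attach/detach itself.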
00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:26.499 11:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:26.499 [2024-10-07 11:27:07.895377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:16:26.499 [2024-10-07 11:27:07.898062] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:26.499 [2024-10-07 11:27:07.898116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.499 [2024-10-07 11:27:07.898141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.499 [2024-10-07 11:27:07.898169] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:26.499 [2024-10-07 11:27:07.898187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.499 [2024-10-07 11:27:07.898203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.499 [2024-10-07 11:27:07.898223] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:26.499 [2024-10-07 11:27:07.898237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.499 [2024-10-07 11:27:07.898254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.499 [2024-10-07 11:27:07.898271] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:26.499 [2024-10-07 11:27:07.898287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.499 [2024-10-07 11:27:07.898302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.499 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:26.499 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:26.499 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:26.499 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:26.499 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:26.499 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:26.499 11:27:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.499 11:27:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 11:27:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.499 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:26.499 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:26.756 11:27:08 sw_hotplug -- 
nvme/sw_hotplug.sh@62 -- # echo '' 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:26.756 11:27:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:38.953 11:27:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.953 11:27:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 11:27:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:38.953 11:27:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.953 11:27:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 [2024-10-07 11:27:20.575321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
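Those "Still waiting for ... to be gone" lines come from a poll loop around bdev_bdfs: after the hot-remove, the target should eventually drop both controllers from bdev_get_bdevs. A sketch of that loop; folding the refresh, the count check and the sleep into the while condition mirrors how all three commands trace as sw_hotplug.sh@50:

while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
done

The @71 check above -- [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:... ]] -- looks odd only because xtrace backslash-escapes the right-hand side of [[ == ]] to show it is matched literally; it simply asserts that, after reattach, bdev_bdfs reports exactly the two expected addresses again.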
00:16:38.953 [2024-10-07 11:27:20.577932] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:38.953 [2024-10-07 11:27:20.577983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.953 [2024-10-07 11:27:20.578001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.953 [2024-10-07 11:27:20.578030] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:38.953 [2024-10-07 11:27:20.578042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.953 [2024-10-07 11:27:20.578057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.953 [2024-10-07 11:27:20.578071] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:38.953 [2024-10-07 11:27:20.578085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.953 [2024-10-07 11:27:20.578098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.953 [2024-10-07 11:27:20.578122] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:38.953 [2024-10-07 11:27:20.578134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.953 [2024-10-07 11:27:20.578149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.953 11:27:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:38.953 11:27:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:39.526 [2024-10-07 11:27:20.974702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:16:39.526 [2024-10-07 11:27:20.977375] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.526 [2024-10-07 11:27:20.977424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.526 [2024-10-07 11:27:20.977446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.526 [2024-10-07 11:27:20.977471] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.526 [2024-10-07 11:27:20.977486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.526 [2024-10-07 11:27:20.977497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.526 [2024-10-07 11:27:20.977514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.526 [2024-10-07 11:27:20.977525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.526 [2024-10-07 11:27:20.977539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.526 [2024-10-07 11:27:20.977552] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.526 [2024-10-07 11:27:20.977566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.526 [2024-10-07 11:27:20.977577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.526 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:39.526 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:39.526 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:39.526 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:39.526 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:39.526 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:39.526 11:27:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.526 11:27:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:39.526 11:27:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.526 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:39.526 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:39.784 11:27:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:51.984 11:27:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.984 11:27:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:51.984 11:27:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:51.984 [2024-10-07 11:27:33.554481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:51.984 [2024-10-07 11:27:33.557557] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.984 [2024-10-07 11:27:33.557609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.984 [2024-10-07 11:27:33.557628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.984 [2024-10-07 11:27:33.557656] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.984 [2024-10-07 11:27:33.557669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.984 [2024-10-07 11:27:33.557687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.984 [2024-10-07 11:27:33.557701] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.984 [2024-10-07 11:27:33.557715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.984 [2024-10-07 11:27:33.557727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.984 [2024-10-07 11:27:33.557755] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:51.984 [2024-10-07 11:27:33.557768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.984 [2024-10-07 11:27:33.557782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:51.984 11:27:33 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:51.984 11:27:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.984 11:27:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:51.984 11:27:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:51.984 11:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:52.569 [2024-10-07 11:27:34.053692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:16:52.569 [2024-10-07 11:27:34.056483] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.569 [2024-10-07 11:27:34.056528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.569 [2024-10-07 11:27:34.056552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.569 [2024-10-07 11:27:34.056575] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.569 [2024-10-07 11:27:34.056592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.569 [2024-10-07 11:27:34.056605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.569 [2024-10-07 11:27:34.056624] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.569 [2024-10-07 11:27:34.056635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.569 [2024-10-07 11:27:34.056658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.569 [2024-10-07 11:27:34.056672] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.569 [2024-10-07 11:27:34.056688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.569 [2024-10-07 11:27:34.056700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.569 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:52.569 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:52.569 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:52.569 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:52.569 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:52.569 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:16:52.569 11:27:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.569 11:27:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:52.569 11:27:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.569 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:52.569 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:52.827 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:52.827 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:52.828 11:27:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.14 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.14 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:17:05.030 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.030 11:27:46 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:17:05.030 11:27:46 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:05.030 11:27:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:11.595 11:27:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.595 11:27:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:11.595 [2024-10-07 11:27:52.674118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:17:11.595 [2024-10-07 11:27:52.676763] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.595 [2024-10-07 11:27:52.676810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.595 [2024-10-07 11:27:52.676828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.595 [2024-10-07 11:27:52.676856] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.595 [2024-10-07 11:27:52.676868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.595 [2024-10-07 11:27:52.676882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.595 [2024-10-07 11:27:52.676896] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.595 [2024-10-07 11:27:52.676909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.595 [2024-10-07 11:27:52.676921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.595 [2024-10-07 11:27:52.676936] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.595 [2024-10-07 11:27:52.676947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.595 [2024-10-07 11:27:52.676965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.595 11:27:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:11.595 11:27:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:11.595 [2024-10-07 11:27:53.073494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
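[Annotation] The "(( 2 > 0 ))" check and "sleep 0.5" just above are the removal poll: sw_hotplug.sh keeps re-running its bdev_bdfs helper until no NVMe-backed bdevs remain. The helper is fully visible in the @12/@13 xtrace entries; below is a minimal reconstruction, with the RPC call, jq filter, and sort copied verbatim from the trace and only the function layout assumed.

    # bdev_bdfs, reconstructed from the sw_hotplug.sh@12-13 trace entries:
    # list all bdevs over RPC, extract the backing PCI addresses, dedupe.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll-loop shape from the sw_hotplug.sh@50-51 entries (reconstruction):
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

The /dev/fd/63 argument after the jq filter in the trace is bash feeding rpc_cmd's output through process substitution rather than a plain pipe; the pipe above is an equivalent simplification.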
00:17:11.595 [2024-10-07 11:27:53.075518] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.595 [2024-10-07 11:27:53.075564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.595 [2024-10-07 11:27:53.075588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.595 [2024-10-07 11:27:53.075614] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.595 [2024-10-07 11:27:53.075633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.595 [2024-10-07 11:27:53.075646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.595 [2024-10-07 11:27:53.075665] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.595 [2024-10-07 11:27:53.075677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.595 [2024-10-07 11:27:53.075695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.595 [2024-10-07 11:27:53.075708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.595 [2024-10-07 11:27:53.075724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.595 [2024-10-07 11:27:53.075736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.595 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:11.595 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:11.595 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:11.595 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:11.595 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:11.595 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:11.595 11:27:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.595 11:27:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:11.595 11:27:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.595 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:11.595 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:11.854 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:12.112 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:12.112 11:27:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:24.540 11:28:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.540 11:28:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:24.540 11:28:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:24.540 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:24.540 [2024-10-07 11:28:05.653251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:17:24.540 [2024-10-07 11:28:05.656235] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:24.540 [2024-10-07 11:28:05.656290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.540 [2024-10-07 11:28:05.656309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.540 [2024-10-07 11:28:05.656336] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:24.540 [2024-10-07 11:28:05.656348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.540 [2024-10-07 11:28:05.656362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.540 [2024-10-07 11:28:05.656375] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:24.540 [2024-10-07 11:28:05.656390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.540 [2024-10-07 11:28:05.656401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.540 [2024-10-07 11:28:05.656417] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:24.541 [2024-10-07 11:28:05.656428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.541 [2024-10-07 11:28:05.656442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:24.541 11:28:05 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:24.541 11:28:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.541 11:28:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:24.541 11:28:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:24.541 11:28:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:24.541 [2024-10-07 11:28:06.152461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:17:24.541 [2024-10-07 11:28:06.154416] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:24.541 [2024-10-07 11:28:06.154459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.541 [2024-10-07 11:28:06.154480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.541 [2024-10-07 11:28:06.154518] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:24.541 [2024-10-07 11:28:06.154538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.541 [2024-10-07 11:28:06.154551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.541 [2024-10-07 11:28:06.154567] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:24.541 [2024-10-07 11:28:06.154579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.541 [2024-10-07 11:28:06.154594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.541 [2024-10-07 11:28:06.154609] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:24.541 [2024-10-07 11:28:06.154623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.541 [2024-10-07 11:28:06.154635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:17:24.799 11:28:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.799 11:28:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.799 11:28:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:24.799 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:25.058 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:25.058 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:25.058 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:25.058 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:25.058 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:25.058 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:25.058 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:25.058 11:28:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:37.268 11:28:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.268 11:28:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:37.268 11:28:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:37.268 [2024-10-07 11:28:18.732235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:17:37.268 [2024-10-07 11:28:18.737545] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:37.268 [2024-10-07 11:28:18.737602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.268 [2024-10-07 11:28:18.737620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.268 [2024-10-07 11:28:18.737649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:37.268 [2024-10-07 11:28:18.737661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.268 [2024-10-07 11:28:18.737677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.268 [2024-10-07 11:28:18.737690] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:37.268 [2024-10-07 11:28:18.737708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.268 [2024-10-07 11:28:18.737720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.268 [2024-10-07 11:28:18.737736] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:37.268 [2024-10-07 11:28:18.737759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.268 [2024-10-07 11:28:18.737774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:37.268 11:28:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.268 11:28:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:37.268 11:28:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:37.268 11:28:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:37.526 [2024-10-07 11:28:19.231450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
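[Annotation] A reading note for the bare echo commands that bracket each cycle (sw_hotplug.sh@40 for removal, @56-62 for re-attach): xtrace never prints redirections, so their sysfs targets are invisible in this log. The flow is the standard PCI soft-hotplug sequence; the sketch below fills in plausible targets, and every redirection path in it is an inference, not something the trace shows.

    # Hot-remove (@40, "echo 1"): detach the device from the bus. Path assumed.
    echo 1 > "/sys/bus/pci/devices/$dev/remove"

    # Re-attach (@56, "echo 1"): rescan the bus so the devices reappear. Path assumed.
    echo 1 > /sys/bus/pci/rescan

    # Steer each device back to uio_pci_generic (@59-62). The trace echoes the
    # BDF twice (@60/@61), presumably into two different bind/probe nodes,
    # then clears driver_override (@62). All paths below are assumptions.
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done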
00:17:37.526 [2024-10-07 11:28:19.233356] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:37.526 [2024-10-07 11:28:19.233400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.526 [2024-10-07 11:28:19.233420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.526 [2024-10-07 11:28:19.233444] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:37.526 [2024-10-07 11:28:19.233459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.526 [2024-10-07 11:28:19.233472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.526 [2024-10-07 11:28:19.233487] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:37.526 [2024-10-07 11:28:19.233499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.526 [2024-10-07 11:28:19.233513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.526 [2024-10-07 11:28:19.233526] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:37.526 [2024-10-07 11:28:19.233546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.526 [2024-10-07 11:28:19.233558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.783 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:37.783 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:37.783 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:37.783 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:37.783 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:37.783 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:37.783 11:28:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.783 11:28:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:37.783 11:28:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.783 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:37.783 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:38.042 11:28:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.20 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.20 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:17:50.246 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:17:50.246 11:28:31 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69337 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 69337 ']' 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 69337 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69337 00:17:50.246 killing process with pid 69337 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69337' 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@969 -- # kill 69337 00:17:50.246 11:28:31 sw_hotplug -- common/autotest_common.sh@974 -- # wait 69337 00:17:52.778 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:53.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:53.952 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:53.952 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:53.952 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:54.214 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:54.214 00:17:54.214 real 2m34.241s 00:17:54.214 user 1m52.004s 00:17:54.214 sys 0m22.505s 00:17:54.214 11:28:35 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:17:54.214 11:28:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:54.214 ************************************ 00:17:54.214 END TEST sw_hotplug 00:17:54.214 ************************************ 00:17:54.214 11:28:35 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:17:54.214 11:28:35 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:17:54.214 11:28:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:54.214 11:28:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:54.214 11:28:35 -- common/autotest_common.sh@10 -- # set +x 00:17:54.214 ************************************ 00:17:54.214 START TEST nvme_xnvme 00:17:54.214 ************************************ 00:17:54.214 11:28:35 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:17:54.214 * Looking for test storage... 00:17:54.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:17:54.473 11:28:35 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:54.473 11:28:35 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:17:54.473 11:28:35 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:54.473 11:28:36 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.473 11:28:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:17:54.473 11:28:36 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.473 11:28:36 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:54.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.473 --rc genhtml_branch_coverage=1 00:17:54.473 --rc genhtml_function_coverage=1 00:17:54.473 --rc genhtml_legend=1 00:17:54.473 --rc geninfo_all_blocks=1 00:17:54.473 --rc geninfo_unexecuted_blocks=1 00:17:54.473 00:17:54.473 ' 00:17:54.473 11:28:36 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:54.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.473 --rc genhtml_branch_coverage=1 00:17:54.473 --rc genhtml_function_coverage=1 00:17:54.473 --rc genhtml_legend=1 00:17:54.473 --rc geninfo_all_blocks=1 00:17:54.473 --rc geninfo_unexecuted_blocks=1 00:17:54.473 00:17:54.473 ' 00:17:54.473 11:28:36 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:54.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.473 --rc genhtml_branch_coverage=1 00:17:54.473 --rc genhtml_function_coverage=1 00:17:54.473 --rc genhtml_legend=1 00:17:54.473 --rc geninfo_all_blocks=1 00:17:54.473 --rc geninfo_unexecuted_blocks=1 00:17:54.473 00:17:54.473 ' 00:17:54.473 11:28:36 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:54.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.473 --rc genhtml_branch_coverage=1 00:17:54.473 --rc genhtml_function_coverage=1 00:17:54.473 --rc genhtml_legend=1 00:17:54.473 --rc geninfo_all_blocks=1 00:17:54.473 --rc geninfo_unexecuted_blocks=1 00:17:54.473 00:17:54.473 ' 00:17:54.474 11:28:36 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.474 11:28:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.474 11:28:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.474 11:28:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.474 11:28:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.474 11:28:36 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.474 11:28:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.474 11:28:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.474 11:28:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:17:54.474 11:28:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.474 11:28:36 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:17:54.474 11:28:36 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:54.474 11:28:36 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:54.474 11:28:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.474 ************************************ 00:17:54.474 START TEST xnvme_to_malloc_dd_copy 00:17:54.474 ************************************ 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:17:54.474 11:28:36 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:54.474 11:28:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:54.474 { 00:17:54.474 "subsystems": [ 00:17:54.474 { 00:17:54.474 "subsystem": "bdev", 00:17:54.474 "config": [ 00:17:54.474 { 00:17:54.474 "params": { 00:17:54.474 "block_size": 512, 00:17:54.474 "num_blocks": 2097152, 00:17:54.474 "name": "malloc0" 00:17:54.474 }, 00:17:54.474 "method": "bdev_malloc_create" 00:17:54.474 }, 00:17:54.474 { 00:17:54.474 "params": { 00:17:54.474 "io_mechanism": "libaio", 00:17:54.474 "filename": "/dev/nullb0", 00:17:54.474 "name": "null0" 00:17:54.474 }, 00:17:54.474 "method": "bdev_xnvme_create" 00:17:54.474 }, 00:17:54.474 { 00:17:54.474 "method": "bdev_wait_for_examine" 00:17:54.474 } 00:17:54.474 ] 00:17:54.474 } 00:17:54.474 ] 00:17:54.474 } 00:17:54.732 [2024-10-07 11:28:36.190821] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
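[Annotation] The JSON document above is the entire bdev stack for this pass: a 1 GiB malloc bdev (2097152 x 512-byte blocks) as the source and an xnvme bdev over /dev/nullb0 with the libaio mechanism as the target, fed to spdk_dd on an anonymous descriptor (--json /dev/fd/62). The pass is easy to replay outside the harness; everything in the sketch below is copied from the log except the temp-file path, which is invented for the example.

    # Standalone replay of the pass above, with the config in a regular file
    # instead of /dev/fd/62. Only /tmp/xnvme_dd.json is made up.
    cat > /tmp/xnvme_dd.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
       "method": "bdev_malloc_create"},
      {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
       "method": "bdev_xnvme_create"},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
        --json /tmp/xnvme_dd.json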
00:17:54.732 [2024-10-07 11:28:36.191074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70710 ] 00:17:54.732 [2024-10-07 11:28:36.363684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.991 [2024-10-07 11:28:36.577713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.525  [2024-10-07T11:28:40.170Z] Copying: 258/1024 [MB] (258 MBps) [2024-10-07T11:28:41.104Z] Copying: 514/1024 [MB] (255 MBps) [2024-10-07T11:28:42.037Z] Copying: 771/1024 [MB] (257 MBps) [2024-10-07T11:28:46.224Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:18:04.513 00:18:04.772 11:28:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:18:04.772 11:28:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:18:04.772 11:28:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:04.772 11:28:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:04.772 { 00:18:04.772 "subsystems": [ 00:18:04.772 { 00:18:04.772 "subsystem": "bdev", 00:18:04.772 "config": [ 00:18:04.772 { 00:18:04.772 "params": { 00:18:04.772 "block_size": 512, 00:18:04.772 "num_blocks": 2097152, 00:18:04.772 "name": "malloc0" 00:18:04.772 }, 00:18:04.772 "method": "bdev_malloc_create" 00:18:04.772 }, 00:18:04.772 { 00:18:04.772 "params": { 00:18:04.772 "io_mechanism": "libaio", 00:18:04.772 "filename": "/dev/nullb0", 00:18:04.772 "name": "null0" 00:18:04.772 }, 00:18:04.772 "method": "bdev_xnvme_create" 00:18:04.772 }, 00:18:04.772 { 00:18:04.772 "method": "bdev_wait_for_examine" 00:18:04.772 } 00:18:04.772 ] 00:18:04.772 } 00:18:04.772 ] 00:18:04.772 } 00:18:04.772 [2024-10-07 11:28:46.339105] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
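[Annotation] The throughput lines above sanity-check cleanly: the malloc bdev is 2097152 blocks x 512 bytes = 1 GiB, so at the reported ~255 MBps average each full pass needs about 1024 / 255 ≈ 4 s of copy time, which matches the roughly one-per-second progress snapshots printed in both copy directions.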
00:18:04.772 [2024-10-07 11:28:46.339276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70821 ] 00:18:05.031 [2024-10-07 11:28:46.530114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.291 [2024-10-07 11:28:46.753995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.888  [2024-10-07T11:28:50.534Z] Copying: 253/1024 [MB] (253 MBps) [2024-10-07T11:28:51.513Z] Copying: 506/1024 [MB] (253 MBps) [2024-10-07T11:28:52.455Z] Copying: 761/1024 [MB] (254 MBps) [2024-10-07T11:28:52.455Z] Copying: 1020/1024 [MB] (258 MBps) [2024-10-07T11:28:56.646Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:18:14.935 00:18:14.935 11:28:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:18:14.935 11:28:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:18:14.935 11:28:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:18:14.935 11:28:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:18:14.935 11:28:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:14.935 11:28:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 { 00:18:14.935 "subsystems": [ 00:18:14.935 { 00:18:14.935 "subsystem": "bdev", 00:18:14.935 "config": [ 00:18:14.935 { 00:18:14.935 "params": { 00:18:14.935 "block_size": 512, 00:18:14.935 "num_blocks": 2097152, 00:18:14.935 "name": "malloc0" 00:18:14.935 }, 00:18:14.935 "method": "bdev_malloc_create" 00:18:14.935 }, 00:18:14.935 { 00:18:14.935 "params": { 00:18:14.935 "io_mechanism": "io_uring", 00:18:14.935 "filename": "/dev/nullb0", 00:18:14.935 "name": "null0" 00:18:14.935 }, 00:18:14.935 "method": "bdev_xnvme_create" 00:18:14.935 }, 00:18:14.935 { 00:18:14.935 "method": "bdev_wait_for_examine" 00:18:14.935 } 00:18:14.935 ] 00:18:14.935 } 00:18:14.935 ] 00:18:14.935 } 00:18:14.935 [2024-10-07 11:28:56.446060] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
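[Annotation] This second round of passes differs from the libaio round in exactly one field: io_mechanism is now io_uring, and the xnvme bdev module swaps the whole submission path underneath the same null0 name. The equivalent bdev can also be created on a live target over RPC; the method and parameter values below come straight from the JSON above, but the positional argument order of rpc.py is an assumption worth confirming with --help.

    # Hypothetical interactive equivalent of the bdev_xnvme_create entry:
    # filename, name, and io_mechanism match the logged params; the rpc.py
    # argument order is assumed, not taken from this log.
    scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring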
00:18:14.935 [2024-10-07 11:28:56.446209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70941 ] 00:18:14.935 [2024-10-07 11:28:56.620705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.194 [2024-10-07 11:28:56.851048] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.726  [2024-10-07T11:29:00.374Z] Copying: 268/1024 [MB] (268 MBps) [2024-10-07T11:29:01.749Z] Copying: 537/1024 [MB] (269 MBps) [2024-10-07T11:29:02.316Z] Copying: 807/1024 [MB] (270 MBps) [2024-10-07T11:29:06.503Z] Copying: 1024/1024 [MB] (average 269 MBps) 00:18:24.792 00:18:24.792 11:29:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:18:24.792 11:29:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:18:24.792 11:29:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:24.792 11:29:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:24.792 { 00:18:24.792 "subsystems": [ 00:18:24.792 { 00:18:24.792 "subsystem": "bdev", 00:18:24.792 "config": [ 00:18:24.792 { 00:18:24.792 "params": { 00:18:24.792 "block_size": 512, 00:18:24.792 "num_blocks": 2097152, 00:18:24.792 "name": "malloc0" 00:18:24.792 }, 00:18:24.792 "method": "bdev_malloc_create" 00:18:24.792 }, 00:18:24.792 { 00:18:24.792 "params": { 00:18:24.792 "io_mechanism": "io_uring", 00:18:24.792 "filename": "/dev/nullb0", 00:18:24.792 "name": "null0" 00:18:24.792 }, 00:18:24.792 "method": "bdev_xnvme_create" 00:18:24.792 }, 00:18:24.792 { 00:18:24.792 "method": "bdev_wait_for_examine" 00:18:24.792 } 00:18:24.792 ] 00:18:24.792 } 00:18:24.792 ] 00:18:24.792 } 00:18:24.792 [2024-10-07 11:29:06.375436] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
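[Annotation] All four spdk_dd passes target the same /dev/nullb0, the RAM-backed device that init_null_blk loaded at the start of the test; both ends of its lifecycle appear verbatim in this log, so the bracket reduces to two module commands.

    modprobe null_blk gb=1   # dd/common.sh@186: creates /dev/nullb0, 1 GiB, RAM-backed
    # ... spdk_dd / bdevperf workloads run against /dev/nullb0 ...
    modprobe -r null_blk     # dd/common.sh@191: remove_null_blk teardown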
00:18:24.792 [2024-10-07 11:29:06.375568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71056 ] 00:18:25.050 [2024-10-07 11:29:06.540018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.343 [2024-10-07 11:29:06.760651] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.898  [2024-10-07T11:29:10.541Z] Copying: 272/1024 [MB] (272 MBps) [2024-10-07T11:29:11.474Z] Copying: 543/1024 [MB] (271 MBps) [2024-10-07T11:29:12.057Z] Copying: 816/1024 [MB] (273 MBps) [2024-10-07T11:29:16.241Z] Copying: 1024/1024 [MB] (average 272 MBps) 00:18:34.530 00:18:34.530 11:29:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:18:34.530 11:29:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:18:34.530 00:18:34.530 real 0m40.081s 00:18:34.530 user 0m35.178s 00:18:34.530 sys 0m4.381s 00:18:34.530 11:29:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.530 ************************************ 00:18:34.530 END TEST xnvme_to_malloc_dd_copy 00:18:34.530 11:29:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:34.530 ************************************ 00:18:34.530 11:29:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:34.530 11:29:16 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:34.530 11:29:16 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.530 11:29:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:34.530 ************************************ 00:18:34.530 START TEST xnvme_bdevperf 00:18:34.530 ************************************ 00:18:34.530 11:29:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:18:34.530 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:18:34.530 11:29:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:18:34.530 11:29:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:18:34.790 
00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio
00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096
00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf
00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:34.790 11:29:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:34.790 {
00:18:34.790   "subsystems": [
00:18:34.790     {
00:18:34.790       "subsystem": "bdev",
00:18:34.790       "config": [
00:18:34.790         {
00:18:34.790           "params": {
00:18:34.790             "io_mechanism": "libaio",
00:18:34.790             "filename": "/dev/nullb0",
00:18:34.790             "name": "null0"
00:18:34.790           },
00:18:34.790           "method": "bdev_xnvme_create"
00:18:34.790         },
00:18:34.790         {
00:18:34.790           "method": "bdev_wait_for_examine"
00:18:34.790         }
00:18:34.790       ]
00:18:34.790     }
00:18:34.790   ]
00:18:34.790 }
00:18:34.790 [2024-10-07 11:29:16.331526] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:18:34.790 [2024-10-07 11:29:16.331829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71188 ]
00:18:35.048 [2024-10-07 11:29:16.503189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.048 [2024-10-07 11:29:16.723877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:18:35.614 Running I/O for 5 seconds...
00:18:37.481 153984.00 IOPS, 601.50 MiB/s [2024-10-07T11:29:20.121Z] 148512.00 IOPS, 580.12 MiB/s [2024-10-07T11:29:21.495Z] 149141.33 IOPS, 582.58 MiB/s [2024-10-07T11:29:22.427Z] 150112.00 IOPS, 586.38 MiB/s [2024-10-07T11:29:22.427Z] 150720.00 IOPS, 588.75 MiB/s
00:18:40.716
00:18:40.716 Latency(us)
00:18:40.716 [2024-10-07T11:29:22.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:40.716 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:18:40.716 null0 : 5.00 150665.42 588.54 0.00 0.00 422.29 403.02 1987.14
00:18:40.716 [2024-10-07T11:29:22.427Z] ===================================================================================================================
00:18:40.716 [2024-10-07T11:29:22.427Z] Total : 150665.42 588.54 0.00 0.00 422.29 403.02 1987.14
00:18:42.092 11:29:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}"
00:18:42.092 11:29:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:18:42.092 11:29:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf
00:18:42.092 11:29:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096
00:18:42.092 11:29:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:42.092 11:29:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:42.092 {
00:18:42.092   "subsystems": [
00:18:42.092     {
00:18:42.092       "subsystem": "bdev",
00:18:42.092       "config": [
00:18:42.092         {
00:18:42.092           "params": {
00:18:42.092             "io_mechanism": "io_uring",
00:18:42.092             "filename": "/dev/nullb0",
00:18:42.092             "name": "null0"
00:18:42.092           },
00:18:42.092           "method": "bdev_xnvme_create"
00:18:42.092         },
00:18:42.092         {
00:18:42.092           "method": "bdev_wait_for_examine"
00:18:42.092         }
00:18:42.092       ]
00:18:42.092     }
00:18:42.092   ]
00:18:42.092 }
00:18:42.092 [2024-10-07 11:29:23.483108] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:18:42.092 [2024-10-07 11:29:23.483235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ]
00:18:42.092 [2024-10-07 11:29:23.654611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:42.350 [2024-10-07 11:29:23.873980] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:18:42.608 Running I/O for 5 seconds...
00:18:44.939 200576.00 IOPS, 783.50 MiB/s [2024-10-07T11:29:27.216Z] 197280.00 IOPS, 770.62 MiB/s [2024-10-07T11:29:28.591Z] 197568.00 IOPS, 771.75 MiB/s [2024-10-07T11:29:29.535Z] 196848.00 IOPS, 768.94 MiB/s
00:18:47.824 Latency(us)
00:18:47.824 [2024-10-07T11:29:29.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:47.824 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:18:47.824 null0 : 5.00 196631.28 768.09 0.00 0.00 323.04 230.30 1763.42
00:18:47.824 [2024-10-07T11:29:29.535Z] ===================================================================================================================
00:18:47.824 [2024-10-07T11:29:29.535Z] Total : 196631.28 768.09 0.00 0.00 323.04 230.30 1763.42
00:18:49.200 11:29:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk
00:18:49.200 11:29:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk
00:18:49.200
00:18:49.200 real 0m14.336s
00:18:49.200 user 0m10.817s
00:18:49.200 sys 0m3.315s
00:18:49.200 11:29:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:49.200 11:29:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:49.200 ************************************
00:18:49.200 END TEST xnvme_bdevperf
00:18:49.200 ************************************
00:18:49.200 ************************************
00:18:49.200 END TEST nvme_xnvme
00:18:49.200 ************************************
00:18:49.200
00:18:49.200 real 0m54.809s
00:18:49.200 user 0m46.186s
00:18:49.200 sys 0m7.898s
00:18:49.200 11:29:30 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:49.200 11:29:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:49.200 11:29:30 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:18:49.200 11:29:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:49.200 11:29:30 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:49.200 11:29:30 -- common/autotest_common.sh@10 -- # set +x
00:18:49.200 ************************************
00:18:49.200 START TEST blockdev_xnvme
00:18:49.200 ************************************
00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:18:49.200 * Looking for test storage...
00:18:49.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.200 11:29:30 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:49.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.200 --rc genhtml_branch_coverage=1 00:18:49.200 --rc genhtml_function_coverage=1 00:18:49.200 --rc genhtml_legend=1 00:18:49.200 --rc geninfo_all_blocks=1 00:18:49.200 --rc geninfo_unexecuted_blocks=1 00:18:49.200 00:18:49.200 ' 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:49.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.200 --rc genhtml_branch_coverage=1 00:18:49.200 --rc genhtml_function_coverage=1 00:18:49.200 --rc genhtml_legend=1 
00:18:49.200 --rc geninfo_all_blocks=1 00:18:49.200 --rc geninfo_unexecuted_blocks=1 00:18:49.200 00:18:49.200 ' 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:49.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.200 --rc genhtml_branch_coverage=1 00:18:49.200 --rc genhtml_function_coverage=1 00:18:49.200 --rc genhtml_legend=1 00:18:49.200 --rc geninfo_all_blocks=1 00:18:49.200 --rc geninfo_unexecuted_blocks=1 00:18:49.200 00:18:49.200 ' 00:18:49.200 11:29:30 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:49.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.200 --rc genhtml_branch_coverage=1 00:18:49.200 --rc genhtml_function_coverage=1 00:18:49.200 --rc genhtml_legend=1 00:18:49.200 --rc geninfo_all_blocks=1 00:18:49.200 --rc geninfo_unexecuted_blocks=1 00:18:49.200 00:18:49.200 ' 00:18:49.200 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:49.200 11:29:30 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:49.200 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:49.200 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71422 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71422 00:18:49.201 11:29:30 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:49.201 11:29:30 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 71422 ']' 00:18:49.201 11:29:30 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.201 11:29:30 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.201 11:29:30 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.201 11:29:30 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.201 11:29:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.459 [2024-10-07 11:29:31.021089] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:18:49.459 [2024-10-07 11:29:31.021397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71422 ] 00:18:49.717 [2024-10-07 11:29:31.197962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.717 [2024-10-07 11:29:31.424210] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.657 11:29:32 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.657 11:29:32 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:18:50.657 11:29:32 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:50.657 11:29:32 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:18:50.657 11:29:32 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:18:50.657 11:29:32 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:18:50.657 11:29:32 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:51.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:51.482 Waiting for block devices as requested 00:18:51.482 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:51.739 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:51.739 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:51.997 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:57.281 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:18:57.281 
11:29:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:57.281 11:29:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:57.281 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:18:57.282 nvme0n1 00:18:57.282 nvme1n1 00:18:57.282 nvme2n1 00:18:57.282 nvme2n2 00:18:57.282 nvme2n3 00:18:57.282 nvme3n1 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "72d489ad-1015-476c-aacd-75be29d8523b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "72d489ad-1015-476c-aacd-75be29d8523b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b4625f12-9106-4dae-b060-6284c1c871d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b4625f12-9106-4dae-b060-6284c1c871d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e3e23f6d-8f0e-4da7-bb8e-02af83263e41"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3e23f6d-8f0e-4da7-bb8e-02af83263e41",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "5edb9825-60a1-4ab3-93e0-53fd020d6b4b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5edb9825-60a1-4ab3-93e0-53fd020d6b4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "cbb9a2dd-67bb-4ee7-be71-be7671f66734"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cbb9a2dd-67bb-4ee7-be71-be7671f66734",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d6d110ca-7cf3-4efb-b6ae-a3fb6819a04f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d6d110ca-7cf3-4efb-b6ae-a3fb6819a04f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:57.282 11:29:38 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71422 00:18:57.282 11:29:38 blockdev_xnvme -- 
common/autotest_common.sh@950 -- # '[' -z 71422 ']'
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 71422
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71422
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71422'
00:18:57.282 killing process with pid 71422
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 71422
00:18:57.282 11:29:38 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 71422
00:18:59.830 11:29:41 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:18:59.830 11:29:41 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:18:59.830 11:29:41 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:18:59.830 11:29:41 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:59.830 11:29:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:59.830 ************************************
00:18:59.830 START TEST bdev_hello_world
00:18:59.830 ************************************
00:18:59.830 11:29:41 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:19:00.088 [2024-10-07 11:29:41.610432] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:19:00.088 [2024-10-07 11:29:41.610768] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71804 ]
00:19:00.088 [2024-10-07 11:29:41.784536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:00.346 [2024-10-07 11:29:42.016736] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:19:00.913 [2024-10-07 11:29:42.462537] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:19:00.913 [2024-10-07 11:29:42.462596] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1
00:19:00.913 [2024-10-07 11:29:42.462623] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:19:00.913 [2024-10-07 11:29:42.464977] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:19:00.913 [2024-10-07 11:29:42.465250] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:19:00.913 [2024-10-07 11:29:42.465274] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:19:00.913 [2024-10-07 11:29:42.465493] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
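The bdev_hello_world stage above is the stock SPDK example run against the first xnvme bdev. A minimal standalone reproduction, assuming the same build tree and bdev.json (the command itself is taken verbatim from the trace):

  # Open the nvme0n1 bdev from bdev.json, write "Hello World!", read it back,
  # then stop the app; this is exactly the NOTICE sequence logged above.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1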
00:19:00.913
00:19:00.913 [2024-10-07 11:29:42.465517] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:19:02.313
00:19:02.313 real 0m2.277s
00:19:02.313 user 0m1.903s
00:19:02.313 ************************************
00:19:02.313 END TEST bdev_hello_world
00:19:02.313 ************************************
00:19:02.313 sys 0m0.257s
00:19:02.313 11:29:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:02.313 11:29:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:19:02.313 11:29:43 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:19:02.313 11:29:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:02.313 11:29:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:02.313 11:29:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:02.313 ************************************
00:19:02.313 START TEST bdev_bounds
00:19:02.313 ************************************
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds ''
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71846
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71846'
00:19:02.313 Process bdevio pid: 71846
00:19:02.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71846
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71846 ']'
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:02.313 11:29:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:19:02.313 [2024-10-07 11:29:43.945091] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:19:02.313 [2024-10-07 11:29:43.945226] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71846 ] 00:19:02.571 [2024-10-07 11:29:44.120687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.829 [2024-10-07 11:29:44.351013] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.829 [2024-10-07 11:29:44.351078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.829 [2024-10-07 11:29:44.351109] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.394 11:29:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.394 11:29:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:03.394 11:29:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:03.394 I/O targets: 00:19:03.394 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:03.394 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:03.394 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:03.394 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:03.394 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:03.394 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:03.394 00:19:03.394 00:19:03.394 CUnit - A unit testing framework for C - Version 2.1-3 00:19:03.394 http://cunit.sourceforge.net/ 00:19:03.394 00:19:03.394 00:19:03.394 Suite: bdevio tests on: nvme3n1 00:19:03.394 Test: blockdev write read block ...passed 00:19:03.394 Test: blockdev write zeroes read block ...passed 00:19:03.394 Test: blockdev write zeroes read no split ...passed 00:19:03.394 Test: blockdev write zeroes read split ...passed 00:19:03.394 Test: blockdev write zeroes read split partial ...passed 00:19:03.394 Test: blockdev reset ...passed 00:19:03.394 Test: blockdev write read 8 blocks ...passed 00:19:03.394 Test: blockdev write read size > 128k ...passed 00:19:03.394 Test: blockdev write read invalid size ...passed 00:19:03.394 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.394 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.394 Test: blockdev write read max offset ...passed 00:19:03.394 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.394 Test: blockdev writev readv 8 blocks ...passed 00:19:03.394 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.394 Test: blockdev writev readv block ...passed 00:19:03.394 Test: blockdev writev readv size > 128k ...passed 00:19:03.394 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.394 Test: blockdev comparev and writev ...passed 00:19:03.394 Test: blockdev nvme passthru rw ...passed 00:19:03.394 Test: blockdev nvme passthru vendor specific ...passed 00:19:03.394 Test: blockdev nvme admin passthru ...passed 00:19:03.394 Test: blockdev copy ...passed 00:19:03.394 Suite: bdevio tests on: nvme2n3 00:19:03.394 Test: blockdev write read block ...passed 00:19:03.394 Test: blockdev write zeroes read block ...passed 00:19:03.394 Test: blockdev write zeroes read no split ...passed 00:19:03.394 Test: blockdev write zeroes read split ...passed 00:19:03.653 Test: blockdev write zeroes read split partial ...passed 00:19:03.653 Test: blockdev reset ...passed 
00:19:03.653 Test: blockdev write read 8 blocks ...passed 00:19:03.653 Test: blockdev write read size > 128k ...passed 00:19:03.653 Test: blockdev write read invalid size ...passed 00:19:03.653 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.653 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.653 Test: blockdev write read max offset ...passed 00:19:03.653 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.653 Test: blockdev writev readv 8 blocks ...passed 00:19:03.653 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.653 Test: blockdev writev readv block ...passed 00:19:03.653 Test: blockdev writev readv size > 128k ...passed 00:19:03.653 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.653 Test: blockdev comparev and writev ...passed 00:19:03.653 Test: blockdev nvme passthru rw ...passed 00:19:03.653 Test: blockdev nvme passthru vendor specific ...passed 00:19:03.653 Test: blockdev nvme admin passthru ...passed 00:19:03.653 Test: blockdev copy ...passed 00:19:03.653 Suite: bdevio tests on: nvme2n2 00:19:03.653 Test: blockdev write read block ...passed 00:19:03.653 Test: blockdev write zeroes read block ...passed 00:19:03.653 Test: blockdev write zeroes read no split ...passed 00:19:03.653 Test: blockdev write zeroes read split ...passed 00:19:03.653 Test: blockdev write zeroes read split partial ...passed 00:19:03.653 Test: blockdev reset ...passed 00:19:03.653 Test: blockdev write read 8 blocks ...passed 00:19:03.653 Test: blockdev write read size > 128k ...passed 00:19:03.653 Test: blockdev write read invalid size ...passed 00:19:03.653 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.653 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.653 Test: blockdev write read max offset ...passed 00:19:03.653 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.653 Test: blockdev writev readv 8 blocks ...passed 00:19:03.653 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.653 Test: blockdev writev readv block ...passed 00:19:03.653 Test: blockdev writev readv size > 128k ...passed 00:19:03.653 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.653 Test: blockdev comparev and writev ...passed 00:19:03.653 Test: blockdev nvme passthru rw ...passed 00:19:03.653 Test: blockdev nvme passthru vendor specific ...passed 00:19:03.653 Test: blockdev nvme admin passthru ...passed 00:19:03.653 Test: blockdev copy ...passed 00:19:03.653 Suite: bdevio tests on: nvme2n1 00:19:03.653 Test: blockdev write read block ...passed 00:19:03.653 Test: blockdev write zeroes read block ...passed 00:19:03.653 Test: blockdev write zeroes read no split ...passed 00:19:03.653 Test: blockdev write zeroes read split ...passed 00:19:03.653 Test: blockdev write zeroes read split partial ...passed 00:19:03.653 Test: blockdev reset ...passed 00:19:03.653 Test: blockdev write read 8 blocks ...passed 00:19:03.653 Test: blockdev write read size > 128k ...passed 00:19:03.653 Test: blockdev write read invalid size ...passed 00:19:03.653 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.653 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.653 Test: blockdev write read max offset ...passed 00:19:03.653 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.653 Test: blockdev writev readv 8 blocks 
...passed 00:19:03.653 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.653 Test: blockdev writev readv block ...passed 00:19:03.653 Test: blockdev writev readv size > 128k ...passed 00:19:03.653 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.653 Test: blockdev comparev and writev ...passed 00:19:03.653 Test: blockdev nvme passthru rw ...passed 00:19:03.653 Test: blockdev nvme passthru vendor specific ...passed 00:19:03.653 Test: blockdev nvme admin passthru ...passed 00:19:03.653 Test: blockdev copy ...passed 00:19:03.653 Suite: bdevio tests on: nvme1n1 00:19:03.653 Test: blockdev write read block ...passed 00:19:03.653 Test: blockdev write zeroes read block ...passed 00:19:03.653 Test: blockdev write zeroes read no split ...passed 00:19:03.653 Test: blockdev write zeroes read split ...passed 00:19:03.911 Test: blockdev write zeroes read split partial ...passed 00:19:03.911 Test: blockdev reset ...passed 00:19:03.911 Test: blockdev write read 8 blocks ...passed 00:19:03.911 Test: blockdev write read size > 128k ...passed 00:19:03.911 Test: blockdev write read invalid size ...passed 00:19:03.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.911 Test: blockdev write read max offset ...passed 00:19:03.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.911 Test: blockdev writev readv 8 blocks ...passed 00:19:03.911 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.911 Test: blockdev writev readv block ...passed 00:19:03.911 Test: blockdev writev readv size > 128k ...passed 00:19:03.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.911 Test: blockdev comparev and writev ...passed 00:19:03.911 Test: blockdev nvme passthru rw ...passed 00:19:03.911 Test: blockdev nvme passthru vendor specific ...passed 00:19:03.911 Test: blockdev nvme admin passthru ...passed 00:19:03.911 Test: blockdev copy ...passed 00:19:03.911 Suite: bdevio tests on: nvme0n1 00:19:03.911 Test: blockdev write read block ...passed 00:19:03.911 Test: blockdev write zeroes read block ...passed 00:19:03.911 Test: blockdev write zeroes read no split ...passed 00:19:03.911 Test: blockdev write zeroes read split ...passed 00:19:03.911 Test: blockdev write zeroes read split partial ...passed 00:19:03.911 Test: blockdev reset ...passed 00:19:03.911 Test: blockdev write read 8 blocks ...passed 00:19:03.911 Test: blockdev write read size > 128k ...passed 00:19:03.911 Test: blockdev write read invalid size ...passed 00:19:03.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.911 Test: blockdev write read max offset ...passed 00:19:03.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.911 Test: blockdev writev readv 8 blocks ...passed 00:19:03.911 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.911 Test: blockdev writev readv block ...passed 00:19:03.911 Test: blockdev writev readv size > 128k ...passed 00:19:03.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.911 Test: blockdev comparev and writev ...passed 00:19:03.911 Test: blockdev nvme passthru rw ...passed 00:19:03.911 Test: blockdev nvme passthru vendor specific ...passed 00:19:03.911 Test: blockdev nvme admin passthru ...passed 00:19:03.911 Test: blockdev copy ...passed 
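The six bdevio suites above are driven from blockdev.sh, as traced at the start of bdev_bounds. A condensed sketch of that driver, with the build-tree paths shortened and the socket wait loop elided as an assumption:

  # Start the bdevio app in wait mode (-w) with no reserved memory (-s 0),
  # then trigger all suites over RPC; this mirrors the bdev_bounds trace above.
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  bdevio_pid=$!
  # (blockdev.sh waits here for /var/tmp/spdk.sock via waitforlisten)
  ./test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"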
00:19:03.911
00:19:03.911 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:19:03.912               suites      6      6    n/a      0        0
00:19:03.912                tests    138    138    138      0        0
00:19:03.912              asserts    780    780    780      0      n/a
00:19:03.912
00:19:03.912 Elapsed time = 1.483 seconds
00:19:03.912 0
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71846
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71846 ']'
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71846
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71846
00:19:03.912 killing process with pid 71846
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71846'
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71846
00:19:03.912 11:29:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71846
00:19:05.302 11:29:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:19:05.302
00:19:05.302 real 0m3.048s
00:19:05.302 user 0m7.143s
00:19:05.302 sys 0m0.465s
00:19:05.302 11:29:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:05.302 11:29:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:19:05.302 ************************************
00:19:05.302 END TEST bdev_bounds
00:19:05.302 ************************************
00:19:05.302 11:29:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:19:05.302 11:29:46 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:19:05.302 11:29:46 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:05.302 11:29:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:05.302 ************************************
00:19:05.302 START TEST bdev_nbd
00:19:05.302 ************************************
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
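The bdev_nbd stage whose setup begins here exports each bdev as a kernel /dev/nbdX node through the spdk-nbd.sock RPC server and sanity-reads it, as the waitfornbd/dd trace further below shows. A minimal sketch of one such round trip; the nbd_stop_disk teardown is an assumption (standard SPDK RPC, not visible in this excerpt):

  # Export an xnvme bdev through the kernel NBD driver and verify it is readable.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk nvme0n1                                     # maps the bdev, e.g. to /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # waitfornbd-style read check
  $rpc nbd_stop_disk /dev/nbd0                                    # assumed teardown, not shown in this log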
00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71913 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71913 /var/tmp/spdk-nbd.sock 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71913 ']' 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:05.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.302 11:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:05.560 [2024-10-07 11:29:47.081083] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:19:05.560 [2024-10-07 11:29:47.081365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.560 [2024-10-07 11:29:47.256691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.818 [2024-10-07 11:29:47.470275] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.403 11:29:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:06.404 11:29:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.661 
1+0 records in 00:19:06.661 1+0 records out 00:19:06.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645716 s, 6.3 MB/s 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:06.661 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.919 1+0 records in 00:19:06.919 1+0 records out 00:19:06.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689074 s, 5.9 MB/s 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:06.919 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:07.176 11:29:48 
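
The per-device pattern traced above (and repeated for every remaining namespace below) is the waitfornbd helper: poll /proc/partitions until the kernel registers the nbd node, then prove the device actually services I/O by reading a single 4 KiB block with O_DIRECT and checking that a non-empty file came back. A minimal bash reconstruction from the xtrace; the sleep pacing and the /tmp scratch path are assumptions, and the real helper in common/autotest_common.sh may differ:

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        # wait for the kernel to list the device
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed pacing; the delay is not visible in the xtrace
    done
    for ((i = 1; i <= 20; i++)); do
        # a direct-I/O read only succeeds once the nbd is really attached
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    size=$(stat -c %s /tmp/nbdtest 2>/dev/null || echo 0)
    rm -f /tmp/nbdtest
    [[ $size -ne 0 ]]   # success only if a non-empty block was read back
}
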
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.176 1+0 records in 00:19:07.176 1+0 records out 00:19:07.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684949 s, 6.0 MB/s 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:07.176 11:29:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.434 1+0 records in 00:19:07.434 1+0 records out 00:19:07.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721251 s, 5.7 MB/s 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:07.434 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.692 1+0 records in 00:19:07.692 1+0 records out 00:19:07.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671784 s, 6.1 MB/s 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:07.692 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:07.949 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:07.949 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:07.949 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:07.949 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:19:07.950 11:29:49 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:07.950 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:07.950 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:07.950 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:19:07.950 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:07.950 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:07.950 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:07.950 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.206 1+0 records in 00:19:08.206 1+0 records out 00:19:08.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797973 s, 5.1 MB/s 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:08.206 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:08.206 { 00:19:08.206 "nbd_device": "/dev/nbd0", 00:19:08.207 "bdev_name": "nvme0n1" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd1", 00:19:08.207 "bdev_name": "nvme1n1" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd2", 00:19:08.207 "bdev_name": "nvme2n1" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd3", 00:19:08.207 "bdev_name": "nvme2n2" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd4", 00:19:08.207 "bdev_name": "nvme2n3" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd5", 00:19:08.207 "bdev_name": "nvme3n1" 00:19:08.207 } 00:19:08.207 ]' 00:19:08.207 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:08.207 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd0", 00:19:08.207 "bdev_name": "nvme0n1" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd1", 00:19:08.207 "bdev_name": "nvme1n1" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd2", 00:19:08.207 "bdev_name": "nvme2n1" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd3", 00:19:08.207 "bdev_name": "nvme2n2" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd4", 00:19:08.207 "bdev_name": "nvme2n3" 00:19:08.207 }, 00:19:08.207 { 00:19:08.207 "nbd_device": "/dev/nbd5", 00:19:08.207 "bdev_name": "nvme3n1" 00:19:08.207 } 00:19:08.207 ]' 00:19:08.207 11:29:49 
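
nbd_get_disks returns the JSON array shown above, one {"nbd_device", "bdev_name"} object per attached device; the next traced step strips it down to bare device paths with jq. The same extraction in isolation, using the socket path from this run:

nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
# one /dev/nbdX path per array element, e.g. nbd_disks_name[0]=/dev/nbd0
nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))
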
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:08.464 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:08.464 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.464 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:08.464 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:08.464 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:08.464 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.464 11:29:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.721 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.978 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.236 11:29:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.493 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.750 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.751 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.751 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
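
Tear-down mirrors bring-up: nbd_stop_disk is issued per device and waitfornbd_exit polls until the node drops out of /proc/partitions, after which nbd_get_disks must report an empty list. A sketch consistent with the trace (the inter-poll sleep is again an assumption):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # keep polling while the device is still listed; stop once it is gone
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
    return 0
}
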
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:10.008 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:10.267 /dev/nbd0 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- 
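
The empty-list check above ends the start/stop pass; bdev/blockdev.sh then moves on to nbd_rpc_data_verify, which re-attaches the same six bdevs but pins each one to an explicit nbd node (nbd0, nbd1, nbd10 through nbd13). Roughly, reusing waitfornbd from the sketch earlier:

bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for ((i = 0; i < 6; i++)); do
    # pin each bdev to the requested node instead of letting SPDK pick one
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
done
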
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.267 1+0 records in 00:19:10.267 1+0 records out 00:19:10.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686328 s, 6.0 MB/s 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.267 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:10.268 11:29:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:19:10.525 /dev/nbd1 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.525 1+0 records in 00:19:10.525 1+0 records out 00:19:10.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622774 s, 6.6 MB/s 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:10.525 11:29:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:10.525 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:19:10.783 /dev/nbd10 00:19:10.783 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:10.783 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:10.783 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:19:10.783 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:10.783 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:10.783 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:10.783 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:19:10.784 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:10.784 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:10.784 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:10.784 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.784 1+0 records in 00:19:10.784 1+0 records out 00:19:10.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730169 s, 5.6 MB/s 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:19:11.042 /dev/nbd11 00:19:11.042 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:11.043 11:29:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.043 1+0 records in 00:19:11.043 1+0 records out 00:19:11.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601004 s, 6.8 MB/s 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:11.043 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.301 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:11.301 11:29:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:11.301 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.301 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:11.301 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:19:11.301 /dev/nbd12 00:19:11.301 11:29:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:11.301 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:11.301 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:19:11.301 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:11.301 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:11.301 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:11.301 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.559 1+0 records in 00:19:11.559 1+0 records out 00:19:11.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501642 s, 8.2 MB/s 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:11.559 /dev/nbd13 00:19:11.559 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.818 1+0 records in 00:19:11.818 1+0 records out 00:19:11.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684166 s, 6.0 MB/s 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd0", 00:19:11.818 "bdev_name": "nvme0n1" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd1", 00:19:11.818 "bdev_name": "nvme1n1" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd10", 00:19:11.818 "bdev_name": "nvme2n1" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd11", 00:19:11.818 "bdev_name": "nvme2n2" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd12", 00:19:11.818 "bdev_name": "nvme2n3" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd13", 00:19:11.818 "bdev_name": "nvme3n1" 00:19:11.818 } 00:19:11.818 ]' 00:19:11.818 11:29:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd0", 00:19:11.818 "bdev_name": "nvme0n1" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd1", 00:19:11.818 "bdev_name": "nvme1n1" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd10", 00:19:11.818 "bdev_name": "nvme2n1" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd11", 00:19:11.818 "bdev_name": "nvme2n2" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd12", 00:19:11.818 "bdev_name": "nvme2n3" 00:19:11.818 }, 00:19:11.818 { 00:19:11.818 "nbd_device": "/dev/nbd13", 00:19:11.818 "bdev_name": "nvme3n1" 00:19:11.818 } 00:19:11.818 ]' 00:19:11.818 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:12.077 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:12.077 /dev/nbd1 00:19:12.077 /dev/nbd10 00:19:12.077 /dev/nbd11 00:19:12.077 /dev/nbd12 00:19:12.077 /dev/nbd13' 00:19:12.077 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:12.078 /dev/nbd1 00:19:12.078 /dev/nbd10 00:19:12.078 /dev/nbd11 00:19:12.078 /dev/nbd12 00:19:12.078 /dev/nbd13' 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:12.078 256+0 records in 00:19:12.078 256+0 records out 00:19:12.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00988951 s, 106 MB/s 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:12.078 256+0 records in 00:19:12.078 256+0 records out 00:19:12.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117067 s, 9.0 MB/s 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.078 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:12.336 256+0 records in 00:19:12.336 256+0 records out 00:19:12.336 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.132872 s, 7.9 MB/s 00:19:12.336 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.336 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:12.336 256+0 records in 00:19:12.336 256+0 records out 00:19:12.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124882 s, 8.4 MB/s 00:19:12.336 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.336 11:29:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:12.661 256+0 records in 00:19:12.661 256+0 records out 00:19:12.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125462 s, 8.4 MB/s 00:19:12.661 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.661 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:12.661 256+0 records in 00:19:12.661 256+0 records out 00:19:12.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126131 s, 8.3 MB/s 00:19:12.661 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.661 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:12.927 256+0 records in 00:19:12.927 256+0 records out 00:19:12.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1311 s, 8.0 MB/s 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
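
The data pass traced around this point writes one shared 1 MiB buffer of random data (256 x 4 KiB blocks, O_DIRECT) to every nbd node, then reads each device back with cmp, failing on any byte mismatch. Condensed, reusing nbd_list from the sketch above:

dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M nbdrandtest "$dev"   # any differing byte fails the test
done
rm nbdrandtest
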
/dev/nbd11 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:12.927 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:12.928 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:12.928 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:12.928 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:12.928 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:12.928 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:12.928 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:12.928 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.188 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.445 11:29:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:13.703 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.961 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:14.219 11:29:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:14.476 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:14.733 malloc_lvol_verify 00:19:14.733 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:14.990 4134ca7b-dbca-4e34-8b87-8eb115b1911f 00:19:14.990 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:15.247 baa274a7-71b3-49f5-bf78-867a7765db49 00:19:15.247 11:29:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:15.505 /dev/nbd0 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:15.505 mke2fs 1.47.0 (5-Feb-2023) 00:19:15.505 
Discarding device blocks: 0/4096 done 00:19:15.505 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:15.505 00:19:15.505 Allocating group tables: 0/1 done 00:19:15.505 Writing inode tables: 0/1 done 00:19:15.505 Creating journal (1024 blocks): done 00:19:15.505 Writing superblocks and filesystem accounting information: 0/1 done 00:19:15.505 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:15.505 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71913 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71913 ']' 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71913 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71913 00:19:15.764 killing process with pid 71913 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71913' 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71913 00:19:15.764 11:29:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71913 00:19:17.667 ************************************ 00:19:17.667 END TEST bdev_nbd 00:19:17.667 ************************************ 00:19:17.667 11:29:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:17.667 00:19:17.667 real 0m11.946s 00:19:17.667 user 0m15.483s 00:19:17.667 sys 0m4.841s 00:19:17.667 11:29:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.667 11:29:58 blockdev_xnvme.bdev_nbd -- 
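
The last nbd check, nbd_with_lvol_verify, builds a small logical-volume stack (malloc bdev -> lvstore -> lvol), exports the lvol as /dev/nbd0, and treats a successful mkfs.ext4 as evidence that the exported capacity is sane. The RPC sequence matches the trace; the size comments are my reading of the positional arguments:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
$rpc -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
$rpc -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in store "lvs"
$rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
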
common/autotest_common.sh@10 -- # set +x 00:19:17.667 11:29:58 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:17.667 11:29:58 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:19:17.667 11:29:58 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:19:17.667 11:29:58 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:17.667 11:29:58 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:17.667 11:29:58 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.667 11:29:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.667 ************************************ 00:19:17.667 START TEST bdev_fio 00:19:17.667 ************************************ 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:17.667 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:17.667 11:29:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:17.667 
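For orientation, the lvol-verify pass that closed out bdev_nbd above reduces to a short rpc.py sequence. This is a condensed sketch assembled from the xtrace lines (same socket path, names, and sizes as logged), not a standalone test script:

```bash
# Condensed from the nbd_with_lvol_verify xtrace above; every call appears
# verbatim in the log. Sizes are in MiB (bdev_malloc_create: 16 MiB with
# 512 B blocks; bdev_lvol_create: 4 MiB).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # prove the device takes writes
$RPC nbd_stop_disk /dev/nbd0                           # tear the nbd mapping down
```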
11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:17.667 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:17.668 ************************************ 00:19:17.668 START TEST bdev_fio_rw_verify 00:19:17.668 ************************************ 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:17.668 11:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:17.668 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:17.668 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:17.668 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:17.668 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:17.668 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:17.668 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:17.668 fio-3.35 00:19:17.668 Starting 6 threads 00:19:29.886 00:19:29.886 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72336: Mon Oct 7 11:30:10 2024 00:19:29.886 read: IOPS=34.2k, BW=134MiB/s (140MB/s)(1336MiB/10003msec) 00:19:29.886 slat (usec): min=2, max=1166, avg= 5.95, stdev= 5.27 00:19:29.886 clat (usec): min=77, max=4639, avg=523.81, 
stdev=219.65 00:19:29.886 lat (usec): min=80, max=4651, avg=529.77, stdev=220.59 00:19:29.886 clat percentiles (usec): 00:19:29.886 | 50.000th=[ 519], 99.000th=[ 1156], 99.900th=[ 1860], 99.990th=[ 3818], 00:19:29.886 | 99.999th=[ 4555] 00:19:29.886 write: IOPS=34.5k, BW=135MiB/s (141MB/s)(1349MiB/10003msec); 0 zone resets 00:19:29.886 slat (usec): min=8, max=2091, avg=23.79, stdev=33.19 00:19:29.886 clat (usec): min=80, max=9236, avg=640.61, stdev=348.46 00:19:29.886 lat (usec): min=95, max=9255, avg=664.41, stdev=352.92 00:19:29.886 clat percentiles (usec): 00:19:29.886 | 50.000th=[ 594], 99.000th=[ 1844], 99.900th=[ 4113], 99.990th=[ 5932], 00:19:29.886 | 99.999th=[ 9241] 00:19:29.886 bw ( KiB/s): min=109662, max=164400, per=99.63%, avg=137602.37, stdev=2347.77, samples=114 00:19:29.886 iops : min=27414, max=41100, avg=34399.89, stdev=586.98, samples=114 00:19:29.886 lat (usec) : 100=0.01%, 250=6.70%, 500=32.35%, 750=43.76%, 1000=11.92% 00:19:29.886 lat (msec) : 2=4.81%, 4=0.39%, 10=0.06% 00:19:29.886 cpu : usr=54.21%, sys=30.90%, ctx=6668, majf=0, minf=28268 00:19:29.886 IO depths : 1=11.7%, 2=24.1%, 4=50.8%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.886 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.886 issued rwts: total=342072,345389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.886 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:29.886 00:19:29.886 Run status group 0 (all jobs): 00:19:29.886 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=1336MiB (1401MB), run=10003-10003msec 00:19:29.886 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=1349MiB (1415MB), run=10003-10003msec 00:19:30.145 ----------------------------------------------------- 00:19:30.145 Suppressions used: 00:19:30.145 count bytes template 00:19:30.145 6 48 /usr/src/fio/parse.c 00:19:30.145 3065 294240 /usr/src/fio/iolog.c 00:19:30.145 1 8 libtcmalloc_minimal.so 00:19:30.145 1 904 libcrypto.so 00:19:30.145 ----------------------------------------------------- 00:19:30.145 00:19:30.145 00:19:30.145 real 0m12.642s 00:19:30.145 user 0m34.564s 00:19:30.145 sys 0m19.055s 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:30.145 ************************************ 00:19:30.145 END TEST bdev_fio_rw_verify 00:19:30.145 ************************************ 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "72d489ad-1015-476c-aacd-75be29d8523b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "72d489ad-1015-476c-aacd-75be29d8523b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b4625f12-9106-4dae-b060-6284c1c871d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b4625f12-9106-4dae-b060-6284c1c871d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e3e23f6d-8f0e-4da7-bb8e-02af83263e41"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3e23f6d-8f0e-4da7-bb8e-02af83263e41",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "5edb9825-60a1-4ab3-93e0-53fd020d6b4b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5edb9825-60a1-4ab3-93e0-53fd020d6b4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "cbb9a2dd-67bb-4ee7-be71-be7671f66734"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cbb9a2dd-67bb-4ee7-be71-be7671f66734",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d6d110ca-7cf3-4efb-b6ae-a3fb6819a04f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d6d110ca-7cf3-4efb-b6ae-a3fb6819a04f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:30.145 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:30.404 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:30.404 /home/vagrant/spdk_repo/spdk 00:19:30.404 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:30.404 11:30:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
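One detail worth pulling out of the bdev_fio xtrace above is how fio reaches SPDK bdevs at all: the spdk_bdev external ioengine is LD_PRELOADed, with the ASan runtime placed first in the preload list on sanitizer builds. A condensed sketch of that invocation, with flags and paths exactly as logged:

```bash
# Reproduces the preload dance from the bdev_fio_rw_verify xtrace above.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Empty on non-sanitizer builds; /usr/lib64/libasan.so.8 in this run.
ASAN_LIB=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

LD_PRELOAD="$ASAN_LIB $PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
```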
00:19:30.404 00:19:30.404 real 0m12.874s 00:19:30.404 user 0m34.684s 00:19:30.404 sys 0m19.175s 00:19:30.404 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.404 11:30:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:30.404 ************************************ 00:19:30.404 END TEST bdev_fio 00:19:30.404 ************************************ 00:19:30.404 11:30:11 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:30.404 11:30:11 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:30.404 11:30:11 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:30.404 11:30:11 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:30.404 11:30:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:30.404 ************************************ 00:19:30.404 START TEST bdev_verify 00:19:30.404 ************************************ 00:19:30.405 11:30:11 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:30.405 [2024-10-07 11:30:12.034962] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:19:30.405 [2024-10-07 11:30:12.035141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72514 ] 00:19:30.663 [2024-10-07 11:30:12.223586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:30.920 [2024-10-07 11:30:12.435641] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.920 [2024-10-07 11:30:12.435675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.485 Running I/O for 5 seconds... 
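The verify pass whose progress samples follow is a plain bdevperf run; the invocation below is lifted from the log, with the flag meanings spelled out (standard bdevperf options; -C is passed through exactly as the harness does):

```bash
# -q 128    queue depth per job      -o 4096  I/O size in bytes
# -w verify write-then-read-back verification workload
# -t 5      run time in seconds      -m 0x3   core mask (cores 0 and 1,
#                                             hence the two reactors above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
```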
00:19:33.809 25504.00 IOPS, 99.62 MiB/s [2024-10-07T11:30:16.460Z] 26096.00 IOPS, 101.94 MiB/s [2024-10-07T11:30:17.395Z] 25824.00 IOPS, 100.88 MiB/s [2024-10-07T11:30:18.329Z] 25480.00 IOPS, 99.53 MiB/s [2024-10-07T11:30:18.330Z] 25312.00 IOPS, 98.88 MiB/s 00:19:36.619 Latency(us) 00:19:36.619 [2024-10-07T11:30:18.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.619 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x0 length 0xa0000 00:19:36.619 nvme0n1 : 5.06 1921.75 7.51 0.00 0.00 66494.38 12212.33 65272.80 00:19:36.619 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0xa0000 length 0xa0000 00:19:36.619 nvme0n1 : 5.07 1895.30 7.40 0.00 0.00 67426.05 9422.44 61903.88 00:19:36.619 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x0 length 0xbd0bd 00:19:36.619 nvme1n1 : 5.05 3060.40 11.95 0.00 0.00 41656.58 5948.25 58956.08 00:19:36.619 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:36.619 nvme1n1 : 5.07 2920.12 11.41 0.00 0.00 43611.47 6106.17 55166.05 00:19:36.619 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x0 length 0x80000 00:19:36.619 nvme2n1 : 5.06 1922.76 7.51 0.00 0.00 66283.43 14528.46 55587.16 00:19:36.619 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x80000 length 0x80000 00:19:36.619 nvme2n1 : 5.08 1916.23 7.49 0.00 0.00 66275.60 8843.41 61903.88 00:19:36.619 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x0 length 0x80000 00:19:36.619 nvme2n2 : 5.07 1920.17 7.50 0.00 0.00 66201.77 12107.05 53902.70 00:19:36.619 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x80000 length 0x80000 00:19:36.619 nvme2n2 : 5.07 1893.73 7.40 0.00 0.00 66923.95 12001.77 52218.24 00:19:36.619 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x0 length 0x80000 00:19:36.619 nvme2n3 : 5.07 1919.72 7.50 0.00 0.00 66106.49 10791.07 59377.20 00:19:36.619 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x80000 length 0x80000 00:19:36.619 nvme2n3 : 5.07 1892.98 7.39 0.00 0.00 66876.17 10843.71 56850.51 00:19:36.619 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x0 length 0x20000 00:19:36.619 nvme3n1 : 5.07 1919.15 7.50 0.00 0.00 66021.46 4605.94 61903.88 00:19:36.619 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.619 Verification LBA range: start 0x20000 length 0x20000 00:19:36.619 nvme3n1 : 5.07 1892.45 7.39 0.00 0.00 66858.91 7527.43 63588.34 00:19:36.619 [2024-10-07T11:30:18.330Z] =================================================================================================================== 00:19:36.619 [2024-10-07T11:30:18.330Z] Total : 25074.74 97.95 0.00 0.00 60841.87 4605.94 65272.80 00:19:37.992 00:19:37.992 real 0m7.441s 00:19:37.992 user 0m11.096s 00:19:37.992 sys 0m2.136s 00:19:37.992 11:30:19 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:37.992 11:30:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:37.992 ************************************ 00:19:37.992 END TEST bdev_verify 00:19:37.992 ************************************ 00:19:37.992 11:30:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:37.992 11:30:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:37.992 11:30:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:37.992 11:30:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:37.992 ************************************ 00:19:37.992 START TEST bdev_verify_big_io 00:19:37.992 ************************************ 00:19:37.992 11:30:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:37.992 [2024-10-07 11:30:19.525829] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:19:37.992 [2024-10-07 11:30:19.525952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72625 ] 00:19:37.992 [2024-10-07 11:30:19.697495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:38.251 [2024-10-07 11:30:19.912632] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.251 [2024-10-07 11:30:19.912665] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.817 Running I/O for 5 seconds... 
00:19:42.752 928.00 IOPS, 58.00 MiB/s [2024-10-07T11:30:26.364Z] 2368.00 IOPS, 148.00 MiB/s [2024-10-07T11:30:26.365Z] 3179.00 IOPS, 198.69 MiB/s 00:19:44.654 Latency(us) 00:19:44.654 [2024-10-07T11:30:26.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.654 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x0 length 0xa000 00:19:44.654 nvme0n1 : 5.70 134.63 8.41 0.00 0.00 924693.66 115385.47 1886594.57 00:19:44.654 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0xa000 length 0xa000 00:19:44.654 nvme0n1 : 5.69 171.58 10.72 0.00 0.00 707598.31 94750.84 1024151.34 00:19:44.654 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x0 length 0xbd0b 00:19:44.654 nvme1n1 : 5.68 157.81 9.86 0.00 0.00 769828.69 12317.61 1873118.89 00:19:44.654 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0xbd0b length 0xbd0b 00:19:44.654 nvme1n1 : 5.70 162.86 10.18 0.00 0.00 737334.84 17581.55 1408208.09 00:19:44.654 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x0 length 0x8000 00:19:44.654 nvme2n1 : 5.69 202.31 12.64 0.00 0.00 585506.48 123807.77 640094.59 00:19:44.654 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x8000 length 0x8000 00:19:44.654 nvme2n1 : 5.58 183.38 11.46 0.00 0.00 646137.63 76221.79 923083.77 00:19:44.654 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x0 length 0x8000 00:19:44.654 nvme2n2 : 5.70 202.12 12.63 0.00 0.00 581233.50 4237.47 758006.75 00:19:44.654 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x8000 length 0x8000 00:19:44.654 nvme2n2 : 5.69 163.05 10.19 0.00 0.00 704709.62 33899.75 1455372.95 00:19:44.654 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x0 length 0x8000 00:19:44.654 nvme2n3 : 5.71 173.79 10.86 0.00 0.00 658995.72 8264.38 1509275.66 00:19:44.654 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x8000 length 0x8000 00:19:44.654 nvme2n3 : 5.71 187.68 11.73 0.00 0.00 611000.70 3842.67 1078054.04 00:19:44.654 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x0 length 0x2000 00:19:44.654 nvme3n1 : 5.70 151.50 9.47 0.00 0.00 738393.90 8211.74 1394732.41 00:19:44.654 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:44.654 Verification LBA range: start 0x2000 length 0x2000 00:19:44.654 nvme3n1 : 5.71 221.22 13.83 0.00 0.00 505388.18 5211.30 1118481.07 00:19:44.654 [2024-10-07T11:30:26.365Z] =================================================================================================================== 00:19:44.654 [2024-10-07T11:30:26.365Z] Total : 2111.92 131.99 0.00 0.00 667448.67 3842.67 1886594.57 00:19:46.557 00:19:46.557 real 0m8.376s 00:19:46.557 user 0m14.837s 00:19:46.557 sys 0m0.681s 00:19:46.557 11:30:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:46.557 
************************************ 00:19:46.557 END TEST bdev_verify_big_io 00:19:46.557 ************************************ 00:19:46.557 11:30:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.557 11:30:27 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.557 11:30:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:46.557 11:30:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:46.557 11:30:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.557 ************************************ 00:19:46.557 START TEST bdev_write_zeroes 00:19:46.557 ************************************ 00:19:46.557 11:30:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.557 [2024-10-07 11:30:27.978629] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:19:46.557 [2024-10-07 11:30:27.978773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72738 ] 00:19:46.557 [2024-10-07 11:30:28.151906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.817 [2024-10-07 11:30:28.372885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.385 Running I/O for 1 seconds... 
00:19:48.321 71648.00 IOPS, 279.88 MiB/s 00:19:48.322 Latency(us) 00:19:48.322 [2024-10-07T11:30:30.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.322 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.322 nvme0n1 : 1.03 10524.53 41.11 0.00 0.00 12151.60 6843.12 28004.14 00:19:48.322 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.322 nvme1n1 : 1.03 18063.58 70.56 0.00 0.00 7058.78 3395.24 27161.91 00:19:48.322 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.322 nvme2n1 : 1.02 10547.58 41.20 0.00 0.00 12066.25 6790.48 29056.93 00:19:48.322 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.322 nvme2n2 : 1.02 10507.96 41.05 0.00 0.00 12103.72 6843.12 28846.37 00:19:48.322 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.322 nvme2n3 : 1.04 10495.65 41.00 0.00 0.00 12113.34 6895.76 29478.04 00:19:48.322 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.322 nvme3n1 : 1.04 10486.09 40.96 0.00 0.00 12118.09 6579.92 31583.61 00:19:48.322 [2024-10-07T11:30:30.033Z] =================================================================================================================== 00:19:48.322 [2024-10-07T11:30:30.033Z] Total : 70625.39 275.88 0.00 0.00 10814.44 3395.24 31583.61 00:19:49.698 00:19:49.698 real 0m3.297s 00:19:49.698 user 0m2.396s 00:19:49.698 sys 0m0.728s 00:19:49.698 11:30:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.698 11:30:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:49.698 ************************************ 00:19:49.698 END TEST bdev_write_zeroes 00:19:49.698 ************************************ 00:19:49.698 11:30:31 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:49.698 11:30:31 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:49.698 11:30:31 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:49.698 11:30:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:49.698 ************************************ 00:19:49.698 START TEST bdev_json_nonenclosed 00:19:49.698 ************************************ 00:19:49.698 11:30:31 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:49.698 [2024-10-07 11:30:31.342617] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:19:49.698 [2024-10-07 11:30:31.342751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72797 ] 00:19:49.957 [2024-10-07 11:30:31.516179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.216 [2024-10-07 11:30:31.743399] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.216 [2024-10-07 11:30:31.743498] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:50.216 [2024-10-07 11:30:31.743521] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:50.216 [2024-10-07 11:30:31.743534] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:50.474 00:19:50.474 real 0m0.922s 00:19:50.474 user 0m0.657s 00:19:50.474 sys 0m0.159s 00:19:50.474 11:30:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.474 11:30:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:50.474 ************************************ 00:19:50.474 END TEST bdev_json_nonenclosed 00:19:50.474 ************************************ 00:19:50.733 11:30:32 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:50.733 11:30:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:50.733 11:30:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.733 11:30:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:50.733 ************************************ 00:19:50.733 START TEST bdev_json_nonarray 00:19:50.733 ************************************ 00:19:50.733 11:30:32 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:50.733 [2024-10-07 11:30:32.331371] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:19:50.733 [2024-10-07 11:30:32.331509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72828 ] 00:19:50.992 [2024-10-07 11:30:32.502410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.251 [2024-10-07 11:30:32.720703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.251 [2024-10-07 11:30:32.720822] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
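Both negative tests here feed bdevperf a deliberately malformed --json config and expect json_config_prepare_ctx to reject it with the errors quoted in this run. The fixture contents are not printed in the log; the snippets below are hypothetical minimal reconstructions that would trigger exactly these two messages:

```bash
# HYPOTHETICAL fixtures -- the real nonenclosed.json / nonarray.json live in
# test/bdev/ and are not shown in this log.

# Top-level value is an array, not an object -> "not enclosed in {}"
cat > nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF

# "subsystems" is an object, not an array -> "'subsystems' should be an array"
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
```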
00:19:51.251 [2024-10-07 11:30:32.720846] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:51.251 [2024-10-07 11:30:32.720859] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:51.510 00:19:51.510 real 0m0.912s 00:19:51.510 user 0m0.645s 00:19:51.510 sys 0m0.161s 00:19:51.510 11:30:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:51.510 ************************************ 00:19:51.510 END TEST bdev_json_nonarray 00:19:51.510 ************************************ 00:19:51.510 11:30:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:19:51.510 11:30:33 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:52.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:57.797 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:57.797 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:57.797 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:57.797 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:57.797 00:19:57.797 real 1m8.695s 00:19:57.797 user 1m40.900s 00:19:57.797 sys 0m38.315s 00:19:57.797 11:30:39 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:57.797 11:30:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:57.797 ************************************ 00:19:57.797 END TEST blockdev_xnvme 00:19:57.797 ************************************ 00:19:57.797 11:30:39 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:57.797 11:30:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:57.797 11:30:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:57.797 11:30:39 -- common/autotest_common.sh@10 -- # set +x 00:19:57.797 ************************************ 00:19:57.797 START TEST ublk 00:19:57.797 ************************************ 00:19:57.797 11:30:39 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:58.056 * Looking for test storage... 
00:19:58.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:58.056 11:30:39 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.056 11:30:39 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.056 11:30:39 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.056 11:30:39 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.056 11:30:39 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.056 11:30:39 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.056 11:30:39 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.056 11:30:39 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.056 11:30:39 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.056 11:30:39 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.056 11:30:39 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.056 11:30:39 ublk -- scripts/common.sh@344 -- # case "$op" in 00:19:58.056 11:30:39 ublk -- scripts/common.sh@345 -- # : 1 00:19:58.056 11:30:39 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.056 11:30:39 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:58.056 11:30:39 ublk -- scripts/common.sh@365 -- # decimal 1 00:19:58.056 11:30:39 ublk -- scripts/common.sh@353 -- # local d=1 00:19:58.056 11:30:39 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.056 11:30:39 ublk -- scripts/common.sh@355 -- # echo 1 00:19:58.056 11:30:39 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.056 11:30:39 ublk -- scripts/common.sh@366 -- # decimal 2 00:19:58.056 11:30:39 ublk -- scripts/common.sh@353 -- # local d=2 00:19:58.056 11:30:39 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.056 11:30:39 ublk -- scripts/common.sh@355 -- # echo 2 00:19:58.056 11:30:39 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.056 11:30:39 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.056 11:30:39 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.056 11:30:39 ublk -- scripts/common.sh@368 -- # return 0 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:58.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.056 --rc genhtml_branch_coverage=1 00:19:58.056 --rc genhtml_function_coverage=1 00:19:58.056 --rc genhtml_legend=1 00:19:58.056 --rc geninfo_all_blocks=1 00:19:58.056 --rc geninfo_unexecuted_blocks=1 00:19:58.056 00:19:58.056 ' 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:58.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.056 --rc genhtml_branch_coverage=1 00:19:58.056 --rc genhtml_function_coverage=1 00:19:58.056 --rc genhtml_legend=1 00:19:58.056 --rc geninfo_all_blocks=1 00:19:58.056 --rc geninfo_unexecuted_blocks=1 00:19:58.056 00:19:58.056 ' 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:58.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.056 --rc genhtml_branch_coverage=1 00:19:58.056 --rc 
genhtml_function_coverage=1 00:19:58.056 --rc genhtml_legend=1 00:19:58.056 --rc geninfo_all_blocks=1 00:19:58.056 --rc geninfo_unexecuted_blocks=1 00:19:58.056 00:19:58.056 ' 00:19:58.056 11:30:39 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:58.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.056 --rc genhtml_branch_coverage=1 00:19:58.056 --rc genhtml_function_coverage=1 00:19:58.056 --rc genhtml_legend=1 00:19:58.056 --rc geninfo_all_blocks=1 00:19:58.056 --rc geninfo_unexecuted_blocks=1 00:19:58.056 00:19:58.056 ' 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:58.057 11:30:39 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:58.057 11:30:39 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:58.057 11:30:39 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:58.057 11:30:39 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:58.057 11:30:39 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:58.057 11:30:39 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:58.057 11:30:39 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:58.057 11:30:39 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:19:58.057 11:30:39 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:19:58.057 11:30:39 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:58.057 11:30:39 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.057 11:30:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:58.057 ************************************ 00:19:58.057 START TEST test_save_ublk_config 00:19:58.057 ************************************ 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73142 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73142 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 73142 ']' 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
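Once spdk_tgt is up, test_save_ublk_config drives it entirely over rpc.py: create the ublk target, back it with a malloc bdev, start a ublk disk, then snapshot the running configuration (the JSON dump that follows is that snapshot). A condensed sketch, with sizes inferred from the dumped config (8192 blocks x 4096 B = 32 MiB); the ublk_start_disk flag spellings are per rpc.py's ublk commands and may differ slightly by SPDK version:

```bash
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &   # as launched above

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC ublk_create_target                      # dump below shows cpumask "1"
$RPC bdev_malloc_create 32 4096 -b malloc0   # 32 MiB / 4 KiB blocks
$RPC ublk_start_disk malloc0 0 -q 1 -d 128   # /dev/ublkb0, 1 queue, depth 128
$RPC save_config > saved_config.json         # the JSON dumped below
```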
00:19:58.057 11:30:39 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:19:58.057 11:30:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:58.315 [2024-10-07 11:30:39.842291] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:19:58.315 [2024-10-07 11:30:39.842431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73142 ] 00:19:58.315 [2024-10-07 11:30:40.016954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.574 [2024-10-07 11:30:40.249398] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.574 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.574 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:19:59.574 11:30:41 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:19:59.574 11:30:41 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:19:59.574 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.574 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:59.574 [2024-10-07 11:30:41.166768] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:59.574 [2024-10-07 11:30:41.167862] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:59.574 malloc0 00:19:59.574 [2024-10-07 11:30:41.253918] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:59.574 [2024-10-07 11:30:41.254023] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:59.574 [2024-10-07 11:30:41.254036] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:59.574 [2024-10-07 11:30:41.254048] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:59.574 [2024-10-07 11:30:41.263052] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:59.574 [2024-10-07 11:30:41.263081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:59.574 [2024-10-07 11:30:41.269773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:59.574 [2024-10-07 11:30:41.269886] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:59.832 [2024-10-07 11:30:41.287762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:59.832 0 00:19:59.832 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.832 11:30:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:19:59.832 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.832 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:00.091 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.091 11:30:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:00.091 "subsystems": [ 00:20:00.091 { 00:20:00.091 "subsystem": 
"fsdev", 00:20:00.091 "config": [ 00:20:00.091 { 00:20:00.091 "method": "fsdev_set_opts", 00:20:00.091 "params": { 00:20:00.091 "fsdev_io_pool_size": 65535, 00:20:00.091 "fsdev_io_cache_size": 256 00:20:00.091 } 00:20:00.091 } 00:20:00.091 ] 00:20:00.091 }, 00:20:00.091 { 00:20:00.091 "subsystem": "keyring", 00:20:00.091 "config": [] 00:20:00.091 }, 00:20:00.091 { 00:20:00.091 "subsystem": "iobuf", 00:20:00.091 "config": [ 00:20:00.091 { 00:20:00.091 "method": "iobuf_set_options", 00:20:00.091 "params": { 00:20:00.091 "small_pool_count": 8192, 00:20:00.091 "large_pool_count": 1024, 00:20:00.091 "small_bufsize": 8192, 00:20:00.091 "large_bufsize": 135168 00:20:00.091 } 00:20:00.091 } 00:20:00.091 ] 00:20:00.091 }, 00:20:00.091 { 00:20:00.091 "subsystem": "sock", 00:20:00.091 "config": [ 00:20:00.091 { 00:20:00.091 "method": "sock_set_default_impl", 00:20:00.091 "params": { 00:20:00.091 "impl_name": "posix" 00:20:00.091 } 00:20:00.091 }, 00:20:00.091 { 00:20:00.091 "method": "sock_impl_set_options", 00:20:00.091 "params": { 00:20:00.091 "impl_name": "ssl", 00:20:00.091 "recv_buf_size": 4096, 00:20:00.091 "send_buf_size": 4096, 00:20:00.091 "enable_recv_pipe": true, 00:20:00.091 "enable_quickack": false, 00:20:00.091 "enable_placement_id": 0, 00:20:00.091 "enable_zerocopy_send_server": true, 00:20:00.091 "enable_zerocopy_send_client": false, 00:20:00.091 "zerocopy_threshold": 0, 00:20:00.092 "tls_version": 0, 00:20:00.092 "enable_ktls": false 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "sock_impl_set_options", 00:20:00.092 "params": { 00:20:00.092 "impl_name": "posix", 00:20:00.092 "recv_buf_size": 2097152, 00:20:00.092 "send_buf_size": 2097152, 00:20:00.092 "enable_recv_pipe": true, 00:20:00.092 "enable_quickack": false, 00:20:00.092 "enable_placement_id": 0, 00:20:00.092 "enable_zerocopy_send_server": true, 00:20:00.092 "enable_zerocopy_send_client": false, 00:20:00.092 "zerocopy_threshold": 0, 00:20:00.092 "tls_version": 0, 00:20:00.092 "enable_ktls": false 00:20:00.092 } 00:20:00.092 } 00:20:00.092 ] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "vmd", 00:20:00.092 "config": [] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "accel", 00:20:00.092 "config": [ 00:20:00.092 { 00:20:00.092 "method": "accel_set_options", 00:20:00.092 "params": { 00:20:00.092 "small_cache_size": 128, 00:20:00.092 "large_cache_size": 16, 00:20:00.092 "task_count": 2048, 00:20:00.092 "sequence_count": 2048, 00:20:00.092 "buf_count": 2048 00:20:00.092 } 00:20:00.092 } 00:20:00.092 ] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "bdev", 00:20:00.092 "config": [ 00:20:00.092 { 00:20:00.092 "method": "bdev_set_options", 00:20:00.092 "params": { 00:20:00.092 "bdev_io_pool_size": 65535, 00:20:00.092 "bdev_io_cache_size": 256, 00:20:00.092 "bdev_auto_examine": true, 00:20:00.092 "iobuf_small_cache_size": 128, 00:20:00.092 "iobuf_large_cache_size": 16 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "bdev_raid_set_options", 00:20:00.092 "params": { 00:20:00.092 "process_window_size_kb": 1024, 00:20:00.092 "process_max_bandwidth_mb_sec": 0 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "bdev_iscsi_set_options", 00:20:00.092 "params": { 00:20:00.092 "timeout_sec": 30 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "bdev_nvme_set_options", 00:20:00.092 "params": { 00:20:00.092 "action_on_timeout": "none", 00:20:00.092 "timeout_us": 0, 00:20:00.092 "timeout_admin_us": 0, 00:20:00.092 "keep_alive_timeout_ms": 
10000, 00:20:00.092 "arbitration_burst": 0, 00:20:00.092 "low_priority_weight": 0, 00:20:00.092 "medium_priority_weight": 0, 00:20:00.092 "high_priority_weight": 0, 00:20:00.092 "nvme_adminq_poll_period_us": 10000, 00:20:00.092 "nvme_ioq_poll_period_us": 0, 00:20:00.092 "io_queue_requests": 0, 00:20:00.092 "delay_cmd_submit": true, 00:20:00.092 "transport_retry_count": 4, 00:20:00.092 "bdev_retry_count": 3, 00:20:00.092 "transport_ack_timeout": 0, 00:20:00.092 "ctrlr_loss_timeout_sec": 0, 00:20:00.092 "reconnect_delay_sec": 0, 00:20:00.092 "fast_io_fail_timeout_sec": 0, 00:20:00.092 "disable_auto_failback": false, 00:20:00.092 "generate_uuids": false, 00:20:00.092 "transport_tos": 0, 00:20:00.092 "nvme_error_stat": false, 00:20:00.092 "rdma_srq_size": 0, 00:20:00.092 "io_path_stat": false, 00:20:00.092 "allow_accel_sequence": false, 00:20:00.092 "rdma_max_cq_size": 0, 00:20:00.092 "rdma_cm_event_timeout_ms": 0, 00:20:00.092 "dhchap_digests": [ 00:20:00.092 "sha256", 00:20:00.092 "sha384", 00:20:00.092 "sha512" 00:20:00.092 ], 00:20:00.092 "dhchap_dhgroups": [ 00:20:00.092 "null", 00:20:00.092 "ffdhe2048", 00:20:00.092 "ffdhe3072", 00:20:00.092 "ffdhe4096", 00:20:00.092 "ffdhe6144", 00:20:00.092 "ffdhe8192" 00:20:00.092 ] 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "bdev_nvme_set_hotplug", 00:20:00.092 "params": { 00:20:00.092 "period_us": 100000, 00:20:00.092 "enable": false 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "bdev_malloc_create", 00:20:00.092 "params": { 00:20:00.092 "name": "malloc0", 00:20:00.092 "num_blocks": 8192, 00:20:00.092 "block_size": 4096, 00:20:00.092 "physical_block_size": 4096, 00:20:00.092 "uuid": "be1a11d2-696a-40f1-a94c-532cc75b9f0b", 00:20:00.092 "optimal_io_boundary": 0, 00:20:00.092 "md_size": 0, 00:20:00.092 "dif_type": 0, 00:20:00.092 "dif_is_head_of_md": false, 00:20:00.092 "dif_pi_format": 0 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "bdev_wait_for_examine" 00:20:00.092 } 00:20:00.092 ] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "scsi", 00:20:00.092 "config": null 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "scheduler", 00:20:00.092 "config": [ 00:20:00.092 { 00:20:00.092 "method": "framework_set_scheduler", 00:20:00.092 "params": { 00:20:00.092 "name": "static" 00:20:00.092 } 00:20:00.092 } 00:20:00.092 ] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "vhost_scsi", 00:20:00.092 "config": [] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "vhost_blk", 00:20:00.092 "config": [] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "ublk", 00:20:00.092 "config": [ 00:20:00.092 { 00:20:00.092 "method": "ublk_create_target", 00:20:00.092 "params": { 00:20:00.092 "cpumask": "1" 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "ublk_start_disk", 00:20:00.092 "params": { 00:20:00.092 "bdev_name": "malloc0", 00:20:00.092 "ublk_id": 0, 00:20:00.092 "num_queues": 1, 00:20:00.092 "queue_depth": 128 00:20:00.092 } 00:20:00.092 } 00:20:00.092 ] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "nbd", 00:20:00.092 "config": [] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "nvmf", 00:20:00.092 "config": [ 00:20:00.092 { 00:20:00.092 "method": "nvmf_set_config", 00:20:00.092 "params": { 00:20:00.092 "discovery_filter": "match_any", 00:20:00.092 "admin_cmd_passthru": { 00:20:00.092 "identify_ctrlr": false 00:20:00.092 }, 00:20:00.092 "dhchap_digests": [ 00:20:00.092 "sha256", 00:20:00.092 "sha384", 00:20:00.092 
"sha512" 00:20:00.092 ], 00:20:00.092 "dhchap_dhgroups": [ 00:20:00.092 "null", 00:20:00.092 "ffdhe2048", 00:20:00.092 "ffdhe3072", 00:20:00.092 "ffdhe4096", 00:20:00.092 "ffdhe6144", 00:20:00.092 "ffdhe8192" 00:20:00.092 ] 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "nvmf_set_max_subsystems", 00:20:00.092 "params": { 00:20:00.092 "max_subsystems": 1024 00:20:00.092 } 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "method": "nvmf_set_crdt", 00:20:00.092 "params": { 00:20:00.092 "crdt1": 0, 00:20:00.092 "crdt2": 0, 00:20:00.092 "crdt3": 0 00:20:00.092 } 00:20:00.092 } 00:20:00.092 ] 00:20:00.092 }, 00:20:00.092 { 00:20:00.092 "subsystem": "iscsi", 00:20:00.092 "config": [ 00:20:00.092 { 00:20:00.092 "method": "iscsi_set_options", 00:20:00.092 "params": { 00:20:00.092 "node_base": "iqn.2016-06.io.spdk", 00:20:00.092 "max_sessions": 128, 00:20:00.092 "max_connections_per_session": 2, 00:20:00.092 "max_queue_depth": 64, 00:20:00.092 "default_time2wait": 2, 00:20:00.092 "default_time2retain": 20, 00:20:00.092 "first_burst_length": 8192, 00:20:00.092 "immediate_data": true, 00:20:00.092 "allow_duplicated_isid": false, 00:20:00.092 "error_recovery_level": 0, 00:20:00.092 "nop_timeout": 60, 00:20:00.092 "nop_in_interval": 30, 00:20:00.092 "disable_chap": false, 00:20:00.092 "require_chap": false, 00:20:00.092 "mutual_chap": false, 00:20:00.092 "chap_group": 0, 00:20:00.092 "max_large_datain_per_connection": 64, 00:20:00.092 "max_r2t_per_connection": 4, 00:20:00.092 "pdu_pool_size": 36864, 00:20:00.092 "immediate_data_pool_size": 16384, 00:20:00.092 "data_out_pool_size": 2048 00:20:00.092 } 00:20:00.092 } 00:20:00.092 ] 00:20:00.092 } 00:20:00.092 ] 00:20:00.092 }' 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73142 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 73142 ']' 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 73142 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73142 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:00.092 killing process with pid 73142 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73142' 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 73142 00:20:00.092 11:30:41 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 73142 00:20:01.469 [2024-10-07 11:30:43.085971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:01.469 [2024-10-07 11:30:43.123861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:01.469 [2024-10-07 11:30:43.123986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:01.469 [2024-10-07 11:30:43.131777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:01.469 [2024-10-07 11:30:43.131824] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:01.469 [2024-10-07 
11:30:43.131836] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:01.469 [2024-10-07 11:30:43.131857] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:01.469 [2024-10-07 11:30:43.131997] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73216 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73216 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 73216 ']' 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:04.007 11:30:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:04.007 "subsystems": [ 00:20:04.007 { 00:20:04.007 "subsystem": "fsdev", 00:20:04.007 "config": [ 00:20:04.007 { 00:20:04.007 "method": "fsdev_set_opts", 00:20:04.007 "params": { 00:20:04.007 "fsdev_io_pool_size": 65535, 00:20:04.007 "fsdev_io_cache_size": 256 00:20:04.007 } 00:20:04.007 } 00:20:04.007 ] 00:20:04.007 }, 00:20:04.007 { 00:20:04.007 "subsystem": "keyring", 00:20:04.007 "config": [] 00:20:04.007 }, 00:20:04.007 { 00:20:04.007 "subsystem": "iobuf", 00:20:04.007 "config": [ 00:20:04.007 { 00:20:04.007 "method": "iobuf_set_options", 00:20:04.007 "params": { 00:20:04.007 "small_pool_count": 8192, 00:20:04.007 "large_pool_count": 1024, 00:20:04.007 "small_bufsize": 8192, 00:20:04.007 "large_bufsize": 135168 00:20:04.007 } 00:20:04.007 } 00:20:04.007 ] 00:20:04.007 }, 00:20:04.007 { 00:20:04.007 "subsystem": "sock", 00:20:04.007 "config": [ 00:20:04.007 { 00:20:04.007 "method": "sock_set_default_impl", 00:20:04.007 "params": { 00:20:04.007 "impl_name": "posix" 00:20:04.007 } 00:20:04.007 }, 00:20:04.007 { 00:20:04.007 "method": "sock_impl_set_options", 00:20:04.007 "params": { 00:20:04.007 "impl_name": "ssl", 00:20:04.007 "recv_buf_size": 4096, 00:20:04.007 "send_buf_size": 4096, 00:20:04.007 "enable_recv_pipe": true, 00:20:04.007 "enable_quickack": false, 00:20:04.007 "enable_placement_id": 0, 00:20:04.007 "enable_zerocopy_send_server": true, 00:20:04.007 "enable_zerocopy_send_client": false, 00:20:04.007 "zerocopy_threshold": 0, 00:20:04.007 "tls_version": 0, 00:20:04.007 "enable_ktls": false 00:20:04.007 } 00:20:04.007 }, 00:20:04.007 { 00:20:04.007 "method": "sock_impl_set_options", 00:20:04.007 "params": { 00:20:04.007 "impl_name": "posix", 00:20:04.007 "recv_buf_size": 2097152, 00:20:04.007 "send_buf_size": 2097152, 00:20:04.007 "enable_recv_pipe": true, 00:20:04.007 "enable_quickack": false, 00:20:04.007 "enable_placement_id": 0, 00:20:04.007 "enable_zerocopy_send_server": true, 00:20:04.007 "enable_zerocopy_send_client": false, 00:20:04.007 "zerocopy_threshold": 0, 00:20:04.007 "tls_version": 0, 00:20:04.007 "enable_ktls": false 00:20:04.007 } 00:20:04.007 } 00:20:04.007 ] 00:20:04.007 }, 00:20:04.007 { 00:20:04.007 "subsystem": "vmd", 00:20:04.007 "config": [] 00:20:04.007 }, 
00:20:04.007 { 00:20:04.007 "subsystem": "accel", 00:20:04.007 "config": [ 00:20:04.008 { 00:20:04.008 "method": "accel_set_options", 00:20:04.008 "params": { 00:20:04.008 "small_cache_size": 128, 00:20:04.008 "large_cache_size": 16, 00:20:04.008 "task_count": 2048, 00:20:04.008 "sequence_count": 2048, 00:20:04.008 "buf_count": 2048 00:20:04.008 } 00:20:04.008 } 00:20:04.008 ] 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "subsystem": "bdev", 00:20:04.008 "config": [ 00:20:04.008 { 00:20:04.008 "method": "bdev_set_options", 00:20:04.008 "params": { 00:20:04.008 "bdev_io_pool_size": 65535, 00:20:04.008 "bdev_io_cache_size": 256, 00:20:04.008 "bdev_auto_examine": true, 00:20:04.008 "iobuf_small_cache_size": 128, 00:20:04.008 "iobuf_large_cache_size": 16 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "bdev_raid_set_options", 00:20:04.008 "params": { 00:20:04.008 "process_window_size_kb": 1024, 00:20:04.008 "process_max_bandwidth_mb_sec": 0 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "bdev_iscsi_set_options", 00:20:04.008 "params": { 00:20:04.008 "timeout_sec": 30 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "bdev_nvme_set_options", 00:20:04.008 "params": { 00:20:04.008 "action_on_timeout": "none", 00:20:04.008 "timeout_us": 0, 00:20:04.008 "timeout_admin_us": 0, 00:20:04.008 "keep_alive_timeout_ms": 10000, 00:20:04.008 "arbitration_burst": 0, 00:20:04.008 "low_priority_weight": 0, 00:20:04.008 "medium_priority_weight": 0, 00:20:04.008 "high_priority_weight": 0, 00:20:04.008 "nvme_adminq_poll_period_us": 10000, 00:20:04.008 "nvme_ioq_poll_period_us": 0, 00:20:04.008 "io_queue_requests": 0, 00:20:04.008 "delay_cmd_submit": true, 00:20:04.008 "transport_retry_count": 4, 00:20:04.008 "bdev_retry_count": 3, 00:20:04.008 "transport_ack_timeout": 0, 00:20:04.008 "ctrlr_loss_timeout_sec": 0, 00:20:04.008 "reconnect_delay_sec": 0, 00:20:04.008 "fast_io_fail_timeout_sec": 0, 00:20:04.008 "disable_auto_failback": false, 00:20:04.008 "generate_uuids": false, 00:20:04.008 "transport_tos": 0, 00:20:04.008 "nvme_error_stat": false, 00:20:04.008 "rdma_srq_size": 0, 00:20:04.008 "io_path_stat": false, 00:20:04.008 "allow_accel_sequence": false, 00:20:04.008 "rdma_max_cq_size": 0, 00:20:04.008 "rdma_cm_event_timeout_ms": 0, 00:20:04.008 "dhchap_digests": [ 00:20:04.008 "sha256", 00:20:04.008 "sha384", 00:20:04.008 "sha512" 00:20:04.008 ], 00:20:04.008 "dhchap_dhgroups": [ 00:20:04.008 "null", 00:20:04.008 "ffdhe2048", 00:20:04.008 "ffdhe3072", 00:20:04.008 "ffdhe4096", 00:20:04.008 "ffdhe6144", 00:20:04.008 "ffdhe8192" 00:20:04.008 ] 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "bdev_nvme_set_hotplug", 00:20:04.008 "params": { 00:20:04.008 "period_us": 100000, 00:20:04.008 "enable": false 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "bdev_malloc_create", 00:20:04.008 "params": { 00:20:04.008 "name": "malloc0", 00:20:04.008 "num_blocks": 8192, 00:20:04.008 "block_size": 4096, 00:20:04.008 "physical_block_size": 4096, 00:20:04.008 "uuid": "be1a11d2-696a-40f1-a94c-532cc75b9f0b", 00:20:04.008 "optimal_io_boundary": 0, 00:20:04.008 "md_size": 0, 00:20:04.008 "dif_type": 0, 00:20:04.008 "dif_is_head_of_md": false, 00:20:04.008 "dif_pi_format": 0 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "bdev_wait_for_examine" 00:20:04.008 } 00:20:04.008 ] 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "subsystem": "scsi", 00:20:04.008 "config": null 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 
"subsystem": "scheduler", 00:20:04.008 "config": [ 00:20:04.008 { 00:20:04.008 "method": "framework_set_scheduler", 00:20:04.008 "params": { 00:20:04.008 "name": "static" 00:20:04.008 } 00:20:04.008 } 00:20:04.008 ] 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "subsystem": "vhost_scsi", 00:20:04.008 "config": [] 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "subsystem": "vhost_blk", 00:20:04.008 "config": [] 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "subsystem": "ublk", 00:20:04.008 "config": [ 00:20:04.008 { 00:20:04.008 "method": "ublk_create_target", 00:20:04.008 "params": { 00:20:04.008 "cpumask": "1" 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "ublk_start_disk", 00:20:04.008 "params": { 00:20:04.008 "bdev_name": "malloc0", 00:20:04.008 "ublk_id": 0, 00:20:04.008 "num_queues": 1, 00:20:04.008 "queue_depth": 128 00:20:04.008 } 00:20:04.008 } 00:20:04.008 ] 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "subsystem": "nbd", 00:20:04.008 "config": [] 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "subsystem": "nvmf", 00:20:04.008 "config": [ 00:20:04.008 { 00:20:04.008 "method": "nvmf_set_config", 00:20:04.008 "params": { 00:20:04.008 "discovery_filter": "match_any", 00:20:04.008 "admin_cmd_passthru": { 00:20:04.008 "identify_ctrlr": false 00:20:04.008 }, 00:20:04.008 "dhchap_digests": [ 00:20:04.008 "sha256", 00:20:04.008 "sha384", 00:20:04.008 "sha512" 00:20:04.008 ], 00:20:04.008 "dhchap_dhgroups": [ 00:20:04.008 "null", 00:20:04.008 "ffdhe2048", 00:20:04.008 "ffdhe3072", 00:20:04.008 "ffdhe4096", 00:20:04.008 "ffdhe6144", 00:20:04.008 "ffdhe8192" 00:20:04.008 ] 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "nvmf_set_max_subsystems", 00:20:04.008 "params": { 00:20:04.008 "max_subsystems": 1024 00:20:04.008 } 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "method": "nvmf_set_crdt", 00:20:04.008 "params": { 00:20:04.008 "crdt1": 0, 00:20:04.008 "crdt2": 0, 00:20:04.008 "crdt3": 0 00:20:04.008 } 00:20:04.008 } 00:20:04.008 ] 00:20:04.008 }, 00:20:04.008 { 00:20:04.008 "subsystem": "iscsi", 00:20:04.008 "config": [ 00:20:04.008 { 00:20:04.008 "method": "iscsi_set_options", 00:20:04.008 "params": { 00:20:04.008 "node_base": "iqn.2016-06.io.spdk", 00:20:04.008 "max_sessions": 128, 00:20:04.008 "max_connections_per_session": 2, 00:20:04.008 "max_queue_depth": 64, 00:20:04.008 "default_time2wait": 2, 00:20:04.008 "default_time2retain": 20, 00:20:04.008 "first_burst_length": 8192, 00:20:04.008 "immediate_data": true, 00:20:04.008 "allow_duplicated_isid": false, 00:20:04.008 "error_recovery_level": 0, 00:20:04.008 "nop_timeout": 60, 00:20:04.008 "nop_in_interval": 30, 00:20:04.008 "disable_chap": false, 00:20:04.008 "require_chap": false, 00:20:04.008 "mutual_chap": false, 00:20:04.008 "chap_group": 0, 00:20:04.008 "max_large_datain_per_connection": 64, 00:20:04.008 "max_r2t_per_connection": 4, 00:20:04.008 "pdu_pool_size": 36864, 00:20:04.008 "immediate_data_pool_size": 16384, 00:20:04.008 "data_out_pool_size": 2048 00:20:04.008 } 00:20:04.008 } 00:20:04.008 ] 00:20:04.008 } 00:20:04.008 ] 00:20:04.008 }' 00:20:04.008 11:30:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:04.008 [2024-10-07 11:30:45.263232] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:20:04.008 [2024-10-07 11:30:45.263363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73216 ] 00:20:04.008 [2024-10-07 11:30:45.438177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.008 [2024-10-07 11:30:45.668796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.383 [2024-10-07 11:30:46.694758] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:05.383 [2024-10-07 11:30:46.695945] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:05.383 [2024-10-07 11:30:46.702886] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:05.383 [2024-10-07 11:30:46.702973] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:05.383 [2024-10-07 11:30:46.702982] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:05.383 [2024-10-07 11:30:46.702990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:05.383 [2024-10-07 11:30:46.711836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:05.383 [2024-10-07 11:30:46.711857] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:05.383 [2024-10-07 11:30:46.718769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:05.383 [2024-10-07 11:30:46.718865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:05.383 [2024-10-07 11:30:46.735756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73216 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 73216 ']' 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 73216 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73216 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.383 killing process with pid 73216 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73216' 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 73216 00:20:05.383 11:30:46 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 73216 00:20:06.771 [2024-10-07 11:30:48.428068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:06.771 [2024-10-07 11:30:48.462772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:06.771 [2024-10-07 11:30:48.462909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:06.771 [2024-10-07 11:30:48.473775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:06.771 [2024-10-07 11:30:48.473825] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:06.771 [2024-10-07 11:30:48.473834] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:06.771 [2024-10-07 11:30:48.473867] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:06.771 [2024-10-07 11:30:48.474009] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:09.329 11:30:50 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:09.329 00:20:09.329 real 0m10.770s 00:20:09.329 user 0m8.481s 00:20:09.329 sys 0m3.114s 00:20:09.329 11:30:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:09.329 ************************************ 00:20:09.329 END TEST test_save_ublk_config 00:20:09.329 11:30:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:09.329 ************************************ 00:20:09.329 11:30:50 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73309 00:20:09.329 11:30:50 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.329 11:30:50 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:09.329 11:30:50 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73309 00:20:09.329 11:30:50 ublk -- common/autotest_common.sh@831 -- # '[' -z 73309 ']' 00:20:09.329 11:30:50 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.329 11:30:50 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.329 11:30:50 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.329 11:30:50 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.329 11:30:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:09.329 [2024-10-07 11:30:50.664411] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
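The killprocess sequence traced above (it runs once per target in this log: pids 73142, 73216, and later 73309) condenses to roughly the helper below. This is a paraphrase of the autotest_common.sh trace, not its full source; the real helper also inspects ps --no-headers -o comm= and special-cases targets launched under sudo, which this sketch omits:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1   # bail out if the pid is already gone
        echo "killing process with pid $pid"
        kill "$pid"                  # plain SIGTERM, so spdk_tgt can shut down cleanly
        wait "$pid"                  # reap the child and propagate its exit status
    }

The SIGTERM path matters: it is what drives the UBLK_CMD_STOP_DEV/UBLK_CMD_DEL_DEV teardown seen in the debug lines above, so the kernel ublk device is removed rather than leaked.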
00:20:09.329 [2024-10-07 11:30:50.665108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73309 ] 00:20:09.329 [2024-10-07 11:30:50.840367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:09.587 [2024-10-07 11:30:51.057627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.587 [2024-10-07 11:30:51.057661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.522 11:30:51 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.522 11:30:51 ublk -- common/autotest_common.sh@864 -- # return 0 00:20:10.522 11:30:51 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:10.522 11:30:51 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:10.522 11:30:51 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.522 11:30:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:10.522 ************************************ 00:20:10.522 START TEST test_create_ublk 00:20:10.522 ************************************ 00:20:10.522 11:30:51 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:20:10.522 11:30:51 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:10.522 11:30:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.522 11:30:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:10.522 [2024-10-07 11:30:51.954762] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:10.522 [2024-10-07 11:30:51.956944] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:10.522 11:30:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.522 11:30:51 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:10.522 11:30:51 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:10.522 11:30:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.522 11:30:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:10.781 11:30:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:10.781 11:30:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.781 11:30:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:10.781 [2024-10-07 11:30:52.272935] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:10.781 [2024-10-07 11:30:52.273379] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:10.781 [2024-10-07 11:30:52.273399] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:10.781 [2024-10-07 11:30:52.273408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:10.781 [2024-10-07 11:30:52.281139] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:10.781 [2024-10-07 11:30:52.281163] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:10.781 
[2024-10-07 11:30:52.288777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:10.781 [2024-10-07 11:30:52.289335] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:10.781 [2024-10-07 11:30:52.311777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:10.781 11:30:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:10.781 11:30:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.781 11:30:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:10.781 11:30:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:10.781 { 00:20:10.781 "ublk_device": "/dev/ublkb0", 00:20:10.781 "id": 0, 00:20:10.781 "queue_depth": 512, 00:20:10.781 "num_queues": 4, 00:20:10.781 "bdev_name": "Malloc0" 00:20:10.781 } 00:20:10.781 ]' 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:10.781 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:11.039 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:11.039 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:11.039 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:11.039 11:30:52 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
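The template assembled above expands to the invocation on the next lines; here it is reformatted with comments for readability (same flags, values taken verbatim from the trace):

    fio --name=fio_test \
        --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 \
        --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0
    # --size=134217728       : the full 128 MiB of the malloc-backed ublk disk
    # --direct=1             : O_DIRECT, so writes really reach the block device
    # --verify=pattern 0xcc  : every block is stamped with 0xcc for later verification
    # --time_based/runtime   : write for 10 s; fio's note below that the verification
    #                          read phase "will never start" is expected, since the
    #                          write phase consumes the entire runtime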
00:20:11.039 11:30:52 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:11.039 fio: verification read phase will never start because write phase uses all of runtime 00:20:11.039 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:11.039 fio-3.35 00:20:11.039 Starting 1 process 00:20:23.232 00:20:23.232 fio_test: (groupid=0, jobs=1): err= 0: pid=73368: Mon Oct 7 11:31:02 2024 00:20:23.232 write: IOPS=15.6k, BW=60.8MiB/s (63.8MB/s)(609MiB/10001msec); 0 zone resets 00:20:23.232 clat (usec): min=39, max=4299, avg=63.38, stdev=99.20 00:20:23.232 lat (usec): min=39, max=4328, avg=63.84, stdev=99.21 00:20:23.232 clat percentiles (usec): 00:20:23.232 | 1.00th=[ 41], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:20:23.232 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:20:23.232 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 68], 95.00th=[ 73], 00:20:23.232 | 99.00th=[ 91], 99.50th=[ 103], 99.90th=[ 2040], 99.95th=[ 2868], 00:20:23.232 | 99.99th=[ 3490] 00:20:23.232 bw ( KiB/s): min=57312, max=73493, per=100.00%, avg=62609.32, stdev=3777.96, samples=19 00:20:23.232 iops : min=14328, max=18373, avg=15652.42, stdev=944.50, samples=19 00:20:23.232 lat (usec) : 50=3.39%, 100=96.07%, 250=0.32%, 500=0.02%, 750=0.01% 00:20:23.232 lat (usec) : 1000=0.02% 00:20:23.232 lat (msec) : 2=0.07%, 4=0.10%, 10=0.01% 00:20:23.232 cpu : usr=3.17%, sys=10.00%, ctx=155779, majf=0, minf=797 00:20:23.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:23.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.232 issued rwts: total=0,155782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:23.232 00:20:23.232 Run status group 0 (all jobs): 00:20:23.232 WRITE: bw=60.8MiB/s (63.8MB/s), 60.8MiB/s-60.8MiB/s (63.8MB/s-63.8MB/s), io=609MiB (638MB), run=10001-10001msec 00:20:23.232 00:20:23.232 Disk stats (read/write): 00:20:23.232 ublkb0: ios=0/154312, merge=0/0, ticks=0/8640, in_queue=8640, util=99.14% 00:20:23.232 11:31:02 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 [2024-10-07 11:31:02.830330] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:23.232 [2024-10-07 11:31:02.866265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:23.232 [2024-10-07 11:31:02.867149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:23.232 [2024-10-07 11:31:02.876809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:23.232 [2024-10-07 11:31:02.877122] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:23.232 [2024-10-07 11:31:02.877153] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.232 11:31:02 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 [2024-10-07 11:31:02.900893] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:23.232 request: 00:20:23.232 { 00:20:23.232 "ublk_id": 0, 00:20:23.232 "method": "ublk_stop_disk", 00:20:23.232 "req_id": 1 00:20:23.232 } 00:20:23.232 Got JSON-RPC error response 00:20:23.232 response: 00:20:23.232 { 00:20:23.232 "code": -19, 00:20:23.232 "message": "No such device" 00:20:23.232 } 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:23.232 11:31:02 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 [2024-10-07 11:31:02.924891] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:23.232 [2024-10-07 11:31:02.928215] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:23.232 [2024-10-07 11:31:02.928270] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.232 11:31:02 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.232 11:31:03 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:23.232 11:31:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.232 11:31:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:23.232 11:31:03 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:23.232 11:31:03 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:23.232 11:31:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.232 11:31:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:23.232 11:31:03 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:23.232 ************************************ 00:20:23.232 END TEST test_create_ublk 00:20:23.232 ************************************ 00:20:23.232 11:31:03 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:23.232 00:20:23.232 real 0m11.835s 00:20:23.232 user 0m0.724s 00:20:23.232 sys 0m1.134s 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:23.232 11:31:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 11:31:03 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:23.232 11:31:03 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:23.232 11:31:03 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:23.232 11:31:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 ************************************ 00:20:23.232 START TEST test_create_multi_ublk 00:20:23.232 ************************************ 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 [2024-10-07 11:31:03.853758] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:23.232 [2024-10-07 11:31:03.855954] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.232 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:23.232 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:23.232 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.232 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 [2024-10-07 11:31:04.149930] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
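The rest of this test repeats the create sequence just traced for Malloc1 through Malloc3 with ublk ids 1 through 3, so the seq 0 3 loop is effectively the sketch below (assuming rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock, as in SPDK's test harness):

    for i in $(seq 0 3); do
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096    # 128 MiB bdev, 4 KiB blocks
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512  # 4 queues, depth 512 each
    done
    scripts/rpc.py ublk_get_disks   # should list /dev/ublkb0 .. /dev/ublkb3

Each ublk_start_disk drives the same kernel handshake visible in the debug output: ADD_DEV, then SET_PARAMS, then START_DEV, with STOP_DEV/DEL_DEV on teardown.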
00:20:23.232 [2024-10-07 11:31:04.150408] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:23.233 [2024-10-07 11:31:04.150421] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:23.233 [2024-10-07 11:31:04.150434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:23.233 [2024-10-07 11:31:04.157802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:23.233 [2024-10-07 11:31:04.157833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:23.233 [2024-10-07 11:31:04.165781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:23.233 [2024-10-07 11:31:04.166416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:23.233 [2024-10-07 11:31:04.178110] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.233 [2024-10-07 11:31:04.488978] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:23.233 [2024-10-07 11:31:04.489543] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:23.233 [2024-10-07 11:31:04.489573] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:23.233 [2024-10-07 11:31:04.489602] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:23.233 [2024-10-07 11:31:04.498116] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:23.233 [2024-10-07 11:31:04.498143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:23.233 [2024-10-07 11:31:04.504787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:23.233 [2024-10-07 11:31:04.505365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:23.233 [2024-10-07 11:31:04.513779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:23.233 11:31:04 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.233 [2024-10-07 11:31:04.826936] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:23.233 [2024-10-07 11:31:04.827382] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:23.233 [2024-10-07 11:31:04.827394] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:23.233 [2024-10-07 11:31:04.827404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:23.233 [2024-10-07 11:31:04.834793] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:23.233 [2024-10-07 11:31:04.834825] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:23.233 [2024-10-07 11:31:04.842804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:23.233 [2024-10-07 11:31:04.843640] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:23.233 [2024-10-07 11:31:04.847373] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.233 11:31:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.490 11:31:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.491 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:23.491 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:23.491 11:31:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.491 11:31:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.491 [2024-10-07 11:31:05.165903] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:23.491 [2024-10-07 11:31:05.166483] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:23.491 [2024-10-07 11:31:05.166511] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:23.491 [2024-10-07 11:31:05.166525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:23.491 [2024-10-07 
11:31:05.175032] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:23.491 [2024-10-07 11:31:05.175053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:23.491 [2024-10-07 11:31:05.181780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:23.491 [2024-10-07 11:31:05.182353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:23.491 [2024-10-07 11:31:05.190813] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:23.491 11:31:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.491 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:23.749 { 00:20:23.749 "ublk_device": "/dev/ublkb0", 00:20:23.749 "id": 0, 00:20:23.749 "queue_depth": 512, 00:20:23.749 "num_queues": 4, 00:20:23.749 "bdev_name": "Malloc0" 00:20:23.749 }, 00:20:23.749 { 00:20:23.749 "ublk_device": "/dev/ublkb1", 00:20:23.749 "id": 1, 00:20:23.749 "queue_depth": 512, 00:20:23.749 "num_queues": 4, 00:20:23.749 "bdev_name": "Malloc1" 00:20:23.749 }, 00:20:23.749 { 00:20:23.749 "ublk_device": "/dev/ublkb2", 00:20:23.749 "id": 2, 00:20:23.749 "queue_depth": 512, 00:20:23.749 "num_queues": 4, 00:20:23.749 "bdev_name": "Malloc2" 00:20:23.749 }, 00:20:23.749 { 00:20:23.749 "ublk_device": "/dev/ublkb3", 00:20:23.749 "id": 3, 00:20:23.749 "queue_depth": 512, 00:20:23.749 "num_queues": 4, 00:20:23.749 "bdev_name": "Malloc3" 00:20:23.749 } 00:20:23.749 ]' 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:23.749 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:24.007 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:20:24.265 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:24.523 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:24.523 11:31:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:24.523 [2024-10-07 11:31:06.102932] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:24.523 [2024-10-07 11:31:06.142776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:24.523 [2024-10-07 11:31:06.143730] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:24.523 [2024-10-07 11:31:06.150802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:24.523 [2024-10-07 11:31:06.151086] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:24.523 [2024-10-07 11:31:06.151101] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:24.523 [2024-10-07 11:31:06.165850] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:24.523 [2024-10-07 11:31:06.198230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:24.523 [2024-10-07 11:31:06.199206] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:24.523 [2024-10-07 11:31:06.205787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:24.523 [2024-10-07 11:31:06.206045] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:24.523 [2024-10-07 11:31:06.206058] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.523 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:24.523 [2024-10-07 11:31:06.221859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:24.782 [2024-10-07 11:31:06.263195] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:24.782 [2024-10-07 11:31:06.264151] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:24.782 [2024-10-07 11:31:06.271803] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:24.782 [2024-10-07 11:31:06.272072] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:24.782 [2024-10-07 11:31:06.272085] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:24.782 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.782 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:24.782 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:24.782 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.782 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:20:24.782 [2024-10-07 11:31:06.286882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:24.782 [2024-10-07 11:31:06.326220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:24.782 [2024-10-07 11:31:06.327120] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:24.782 [2024-10-07 11:31:06.334802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:24.782 [2024-10-07 11:31:06.335073] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:24.782 [2024-10-07 11:31:06.335086] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:24.782 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.782 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:25.040 [2024-10-07 11:31:06.541880] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:25.040 [2024-10-07 11:31:06.545124] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:25.040 [2024-10-07 11:31:06.545166] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:25.040 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:25.040 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.040 11:31:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:25.040 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.040 11:31:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.606 11:31:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.606 11:31:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.606 11:31:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:25.606 11:31:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.606 11:31:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.173 11:31:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.173 11:31:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.173 11:31:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:26.173 11:31:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.173 11:31:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.431 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.431 11:31:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.431 11:31:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:26.431 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.431 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:27.001 00:20:27.001 real 0m4.708s 00:20:27.001 user 0m1.049s 00:20:27.001 sys 0m0.239s 00:20:27.001 ************************************ 00:20:27.001 END TEST test_create_multi_ublk 00:20:27.001 ************************************ 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:27.001 11:31:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.001 11:31:08 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:27.001 11:31:08 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:27.001 11:31:08 ublk -- ublk/ublk.sh@130 -- # killprocess 73309 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@950 -- # '[' -z 73309 ']' 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@954 -- # kill -0 73309 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@955 -- # uname 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73309 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:27.001 killing process with pid 73309 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73309' 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@969 -- # kill 73309 00:20:27.001 11:31:08 ublk -- common/autotest_common.sh@974 -- # wait 73309 00:20:28.374 [2024-10-07 11:31:09.829259] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:28.374 [2024-10-07 11:31:09.829328] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:29.749 00:20:29.749 real 0m31.794s 00:20:29.749 user 0m45.439s 00:20:29.749 sys 0m10.387s 00:20:29.749 11:31:11 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.749 ************************************ 00:20:29.749 END TEST ublk 00:20:29.749 ************************************ 00:20:29.749 11:31:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:29.749 11:31:11 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:29.749 11:31:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:20:29.749 11:31:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.749 11:31:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.749 ************************************ 00:20:29.749 START TEST ublk_recovery 00:20:29.749 ************************************ 00:20:29.749 11:31:11 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:29.749 * Looking for test storage... 00:20:29.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:29.749 11:31:11 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:29.749 11:31:11 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:20:29.749 11:31:11 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:30.008 11:31:11 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:30.008 11:31:11 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.008 11:31:11 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.008 11:31:11 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.008 11:31:11 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.009 11:31:11 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:30.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.009 --rc genhtml_branch_coverage=1 00:20:30.009 --rc genhtml_function_coverage=1 00:20:30.009 --rc genhtml_legend=1 00:20:30.009 --rc geninfo_all_blocks=1 00:20:30.009 --rc geninfo_unexecuted_blocks=1 00:20:30.009 00:20:30.009 ' 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:30.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.009 --rc genhtml_branch_coverage=1 00:20:30.009 --rc genhtml_function_coverage=1 00:20:30.009 --rc genhtml_legend=1 00:20:30.009 --rc geninfo_all_blocks=1 00:20:30.009 --rc geninfo_unexecuted_blocks=1 00:20:30.009 00:20:30.009 ' 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:30.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.009 --rc genhtml_branch_coverage=1 00:20:30.009 --rc genhtml_function_coverage=1 00:20:30.009 --rc genhtml_legend=1 00:20:30.009 --rc geninfo_all_blocks=1 00:20:30.009 --rc geninfo_unexecuted_blocks=1 00:20:30.009 00:20:30.009 ' 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:30.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.009 --rc genhtml_branch_coverage=1 00:20:30.009 --rc genhtml_function_coverage=1 00:20:30.009 --rc genhtml_legend=1 00:20:30.009 --rc geninfo_all_blocks=1 00:20:30.009 --rc geninfo_unexecuted_blocks=1 00:20:30.009 00:20:30.009 ' 00:20:30.009 11:31:11 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:30.009 11:31:11 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:30.009 11:31:11 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:30.009 11:31:11 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:30.009 11:31:11 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:30.009 11:31:11 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:30.009 11:31:11 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:30.009 11:31:11 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:30.009 11:31:11 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:30.009 11:31:11 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:30.009 11:31:11 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:30.009 11:31:11 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73746 00:20:30.009 11:31:11 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.009 11:31:11 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73746 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73746 ']' 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.009 11:31:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:30.009 [2024-10-07 11:31:11.672950] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:20:30.009 [2024-10-07 11:31:11.673131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73746 ] 00:20:30.268 [2024-10-07 11:31:11.856244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:30.525 [2024-10-07 11:31:12.077937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.525 [2024-10-07 11:31:12.077972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.458 11:31:12 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.458 11:31:12 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:20:31.458 11:31:12 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:31.458 11:31:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.458 11:31:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.458 [2024-10-07 11:31:12.960762] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:31.458 [2024-10-07 11:31:12.963000] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:31.458 11:31:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.458 11:31:12 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:31.458 11:31:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.458 11:31:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.458 malloc0 00:20:31.458 11:31:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.458 11:31:13 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:31.458 11:31:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.458 11:31:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.458 [2024-10-07 11:31:13.104944] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:20:31.458 [2024-10-07 11:31:13.105060] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:31.458 [2024-10-07 11:31:13.105075] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:31.458 [2024-10-07 11:31:13.105084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:31.458 [2024-10-07 11:31:13.113875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:31.458 [2024-10-07 11:31:13.113903] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:31.458 [2024-10-07 11:31:13.120770] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:31.458 [2024-10-07 11:31:13.120916] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:31.458 [2024-10-07 11:31:13.135771] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:31.458 1 00:20:31.458 11:31:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.458 11:31:13 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:32.831 11:31:14 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73782 00:20:32.831 11:31:14 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:32.831 11:31:14 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:32.831 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:32.831 fio-3.35 00:20:32.831 Starting 1 process 00:20:38.171 11:31:19 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73746 00:20:38.171 11:31:19 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:43.441 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73746 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:43.441 11:31:24 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:43.441 11:31:24 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73893 00:20:43.441 11:31:24 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.441 11:31:24 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73893 00:20:43.441 11:31:24 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73893 ']' 00:20:43.441 11:31:24 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.441 11:31:24 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.441 11:31:24 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.441 11:31:24 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.441 11:31:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.441 [2024-10-07 11:31:24.284712] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
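Between the fio launch and the "Killed" message above, the recovery test deliberately hard-crashes the target while I/O is in flight. A condensed sketch of that sequence, using the binaries and parameters from this run (the real script also waits for the rpc socket between steps):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$tgt" -m 0x3 -L ublk & spdk_pid=$!
  "$rpc" ublk_create_target
  "$rpc" bdev_malloc_create -b malloc0 64 4096
  "$rpc" ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 & fio_pid=$!

  sleep 5
  kill -9 "$spdk_pid"                 # simulate a target crash mid-I/O
  sleep 5
  "$tgt" -m 0x3 -L ublk &             # fresh target (pid 73893 in this run)
  "$rpc" ublk_create_target
  "$rpc" bdev_malloc_create -b malloc0 64 4096
  "$rpc" ublk_recover_disk malloc0 1  # re-attach the still-open /dev/ublkb1
  wait "$fio_pid"                     # fio completes its full 60s run post-recovery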
00:20:43.441 [2024-10-07 11:31:24.284894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73893 ] 00:20:43.441 [2024-10-07 11:31:24.467661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:43.441 [2024-10-07 11:31:24.685715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.441 [2024-10-07 11:31:24.686171] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:20:44.014 11:31:25 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.014 [2024-10-07 11:31:25.573763] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:44.014 [2024-10-07 11:31:25.575922] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.014 11:31:25 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.014 malloc0 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.014 11:31:25 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.014 11:31:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.284 [2024-10-07 11:31:25.723927] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:44.284 [2024-10-07 11:31:25.723970] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:44.284 [2024-10-07 11:31:25.723982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:44.284 [2024-10-07 11:31:25.731794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:44.284 [2024-10-07 11:31:25.731821] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:20:44.284 [2024-10-07 11:31:25.731831] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:20:44.284 [2024-10-07 11:31:25.731921] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:44.284 1 00:20:44.285 11:31:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.285 11:31:25 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73782 00:20:44.285 [2024-10-07 11:31:25.739780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:44.285 [2024-10-07 11:31:25.743995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:44.285 [2024-10-07 11:31:25.753954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:44.285 [2024-10-07 
11:31:25.753998] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:21:40.603 00:21:40.603 fio_test: (groupid=0, jobs=1): err= 0: pid=73785: Mon Oct 7 11:32:14 2024 00:21:40.603 read: IOPS=21.4k, BW=83.5MiB/s (87.5MB/s)(5009MiB/60003msec) 00:21:40.603 slat (nsec): min=1798, max=332661, avg=7375.69, stdev=2440.24 00:21:40.603 clat (usec): min=941, max=6612.6k, avg=2935.02, stdev=45951.27 00:21:40.603 lat (usec): min=945, max=6612.6k, avg=2942.40, stdev=45951.27 00:21:40.603 clat percentiles (usec): 00:21:40.603 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:21:40.603 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:21:40.603 | 70.00th=[ 2507], 80.00th=[ 2933], 90.00th=[ 3163], 95.00th=[ 3785], 00:21:40.603 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 6783], 99.95th=[ 7635], 00:21:40.603 | 99.99th=[12911] 00:21:40.603 bw ( KiB/s): min=12520, max=103536, per=100.00%, avg=95030.18, stdev=12881.29, samples=107 00:21:40.603 iops : min= 3130, max=25884, avg=23757.50, stdev=3220.31, samples=107 00:21:40.603 write: IOPS=21.4k, BW=83.4MiB/s (87.5MB/s)(5005MiB/60003msec); 0 zone resets 00:21:40.603 slat (nsec): min=1877, max=1506.1k, avg=7407.15, stdev=2824.79 00:21:40.603 clat (usec): min=745, max=6612.9k, avg=3040.34, stdev=47431.68 00:21:40.603 lat (usec): min=749, max=6613.0k, avg=3047.75, stdev=47431.68 00:21:40.603 clat percentiles (usec): 00:21:40.603 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2409], 00:21:40.603 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:21:40.603 | 70.00th=[ 2638], 80.00th=[ 3032], 90.00th=[ 3294], 95.00th=[ 3785], 00:21:40.603 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 6915], 99.95th=[ 7767], 00:21:40.603 | 99.99th=[13042] 00:21:40.603 bw ( KiB/s): min=12376, max=104776, per=100.00%, avg=94961.75, stdev=12850.38, samples=107 00:21:40.603 iops : min= 3094, max=26194, avg=23740.40, stdev=3212.60, samples=107 00:21:40.603 lat (usec) : 750=0.01%, 1000=0.01% 00:21:40.603 lat (msec) : 2=1.07%, 4=94.88%, 10=4.03%, 20=0.01%, >=2000=0.01% 00:21:40.603 cpu : usr=11.76%, sys=32.03%, ctx=108830, majf=0, minf=13 00:21:40.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:40.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:40.603 issued rwts: total=1282412,1281307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:40.603 00:21:40.603 Run status group 0 (all jobs): 00:21:40.603 READ: bw=83.5MiB/s (87.5MB/s), 83.5MiB/s-83.5MiB/s (87.5MB/s-87.5MB/s), io=5009MiB (5253MB), run=60003-60003msec 00:21:40.604 WRITE: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=5005MiB (5248MB), run=60003-60003msec 00:21:40.604 00:21:40.604 Disk stats (read/write): 00:21:40.604 ublkb1: ios=1279526/1278416, merge=0/0, ticks=3647310/3646495, in_queue=7293805, util=99.94% 00:21:40.604 11:32:14 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.604 [2024-10-07 11:32:14.429344] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:40.604 [2024-10-07 11:32:14.458801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:21:40.604 [2024-10-07 11:32:14.459027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:40.604 [2024-10-07 11:32:14.465768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:40.604 [2024-10-07 11:32:14.465928] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:40.604 [2024-10-07 11:32:14.465948] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.604 11:32:14 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.604 [2024-10-07 11:32:14.474878] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:40.604 [2024-10-07 11:32:14.481873] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:40.604 [2024-10-07 11:32:14.481922] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.604 11:32:14 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:40.604 11:32:14 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:40.604 11:32:14 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73893 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 73893 ']' 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 73893 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73893 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.604 killing process with pid 73893 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73893' 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@969 -- # kill 73893 00:21:40.604 11:32:14 ublk_recovery -- common/autotest_common.sh@974 -- # wait 73893 00:21:40.604 [2024-10-07 11:32:16.168356] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:40.604 [2024-10-07 11:32:16.168420] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:40.604 00:21:40.604 real 1m6.441s 00:21:40.604 user 1m49.782s 00:21:40.604 sys 0m38.104s 00:21:40.604 11:32:17 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.604 11:32:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.604 ************************************ 00:21:40.604 END TEST ublk_recovery 00:21:40.604 ************************************ 00:21:40.604 11:32:17 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@256 -- # timing_exit lib 00:21:40.604 11:32:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.604 11:32:17 -- common/autotest_common.sh@10 -- # set +x 00:21:40.604 11:32:17 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- 
spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:21:40.604 11:32:17 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:40.604 11:32:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:40.604 11:32:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.604 11:32:17 -- common/autotest_common.sh@10 -- # set +x 00:21:40.604 ************************************ 00:21:40.604 START TEST ftl 00:21:40.604 ************************************ 00:21:40.604 11:32:17 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:40.604 * Looking for test storage... 00:21:40.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:40.604 11:32:18 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.604 11:32:18 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.604 11:32:18 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.604 11:32:18 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.604 11:32:18 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.604 11:32:18 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.604 11:32:18 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.604 11:32:18 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.604 11:32:18 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.604 11:32:18 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.604 11:32:18 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.604 11:32:18 ftl -- scripts/common.sh@344 -- # case "$op" in 00:21:40.604 11:32:18 ftl -- scripts/common.sh@345 -- # : 1 00:21:40.604 11:32:18 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.604 11:32:18 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.604 11:32:18 ftl -- scripts/common.sh@365 -- # decimal 1 00:21:40.604 11:32:18 ftl -- scripts/common.sh@353 -- # local d=1 00:21:40.604 11:32:18 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.604 11:32:18 ftl -- scripts/common.sh@355 -- # echo 1 00:21:40.604 11:32:18 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.604 11:32:18 ftl -- scripts/common.sh@366 -- # decimal 2 00:21:40.604 11:32:18 ftl -- scripts/common.sh@353 -- # local d=2 00:21:40.604 11:32:18 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.604 11:32:18 ftl -- scripts/common.sh@355 -- # echo 2 00:21:40.604 11:32:18 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.604 11:32:18 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.604 11:32:18 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.604 11:32:18 ftl -- scripts/common.sh@368 -- # return 0 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:40.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.604 --rc genhtml_branch_coverage=1 00:21:40.604 --rc genhtml_function_coverage=1 00:21:40.604 --rc genhtml_legend=1 00:21:40.604 --rc geninfo_all_blocks=1 00:21:40.604 --rc geninfo_unexecuted_blocks=1 00:21:40.604 00:21:40.604 ' 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:40.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.604 --rc genhtml_branch_coverage=1 00:21:40.604 --rc genhtml_function_coverage=1 00:21:40.604 --rc genhtml_legend=1 00:21:40.604 --rc geninfo_all_blocks=1 00:21:40.604 --rc geninfo_unexecuted_blocks=1 00:21:40.604 00:21:40.604 ' 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:40.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.604 --rc genhtml_branch_coverage=1 00:21:40.604 --rc genhtml_function_coverage=1 00:21:40.604 --rc genhtml_legend=1 00:21:40.604 --rc geninfo_all_blocks=1 00:21:40.604 --rc geninfo_unexecuted_blocks=1 00:21:40.604 00:21:40.604 ' 00:21:40.604 11:32:18 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:40.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.604 --rc genhtml_branch_coverage=1 00:21:40.604 --rc genhtml_function_coverage=1 00:21:40.604 --rc genhtml_legend=1 00:21:40.604 --rc geninfo_all_blocks=1 00:21:40.604 --rc geninfo_unexecuted_blocks=1 00:21:40.604 00:21:40.604 ' 00:21:40.604 11:32:18 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:40.604 11:32:18 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:40.604 11:32:18 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.604 11:32:18 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.604 11:32:18 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
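The common.sh prologue being traced here resolves the FTL test environment; condensed, the assignments amount to the following (dirname "$0" stands in for the concrete ftl.sh path in the trace):

  testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py
  ftl_tgt_core_mask='[0]'                    # FTL target pinned to core 0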
00:21:40.604 11:32:18 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:40.604 11:32:18 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.604 11:32:18 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:40.604 11:32:18 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:40.604 11:32:18 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.604 11:32:18 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.604 11:32:18 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:40.604 11:32:18 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:40.604 11:32:18 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:40.604 11:32:18 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:40.604 11:32:18 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:40.604 11:32:18 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:40.604 11:32:18 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.604 11:32:18 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.604 11:32:18 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:40.604 11:32:18 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:40.604 11:32:18 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:40.604 11:32:18 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:40.604 11:32:18 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:40.604 11:32:18 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:40.604 11:32:18 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:40.604 11:32:18 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:40.604 11:32:18 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.605 11:32:18 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:40.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:40.605 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:40.605 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:40.605 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:40.605 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74704 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:40.605 11:32:18 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74704 00:21:40.605 11:32:18 ftl -- common/autotest_common.sh@831 -- # '[' -z 74704 ']' 00:21:40.605 11:32:18 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.605 11:32:18 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.605 11:32:18 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.605 11:32:18 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.605 11:32:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:40.605 [2024-10-07 11:32:19.041748] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:21:40.605 [2024-10-07 11:32:19.041881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74704 ] 00:21:40.605 [2024-10-07 11:32:19.201925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.605 [2024-10-07 11:32:19.415671] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.605 11:32:19 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.605 11:32:19 ftl -- common/autotest_common.sh@864 -- # return 0 00:21:40.605 11:32:19 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:40.605 11:32:20 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@50 -- # break 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:40.605 11:32:21 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:40.605 11:32:22 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:40.605 11:32:22 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:40.605 11:32:22 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:40.605 11:32:22 ftl -- ftl/ftl.sh@63 -- # break 00:21:40.605 11:32:22 ftl -- ftl/ftl.sh@66 -- # killprocess 74704 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@950 -- # '[' -z 74704 ']' 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@954 -- # kill -0 74704 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@955 -- # uname 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.605 11:32:22 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74704 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.605 killing process with pid 74704 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74704' 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@969 -- # kill 74704 00:21:40.605 11:32:22 ftl -- common/autotest_common.sh@974 -- # wait 74704 00:21:43.164 11:32:24 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:43.164 11:32:24 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:43.164 11:32:24 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:43.164 11:32:24 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.164 11:32:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:43.164 ************************************ 00:21:43.164 START TEST ftl_fio_basic 00:21:43.164 ************************************ 00:21:43.164 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:43.164 * Looking for test storage... 00:21:43.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:43.164 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:43.164 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:21:43.164 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:43.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.424 --rc genhtml_branch_coverage=1 00:21:43.424 --rc genhtml_function_coverage=1 00:21:43.424 --rc genhtml_legend=1 00:21:43.424 --rc geninfo_all_blocks=1 00:21:43.424 --rc geninfo_unexecuted_blocks=1 00:21:43.424 00:21:43.424 ' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:43.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.424 --rc genhtml_branch_coverage=1 00:21:43.424 --rc genhtml_function_coverage=1 00:21:43.424 --rc genhtml_legend=1 00:21:43.424 --rc geninfo_all_blocks=1 00:21:43.424 --rc geninfo_unexecuted_blocks=1 00:21:43.424 00:21:43.424 ' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:43.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.424 --rc genhtml_branch_coverage=1 00:21:43.424 --rc genhtml_function_coverage=1 00:21:43.424 --rc genhtml_legend=1 00:21:43.424 --rc geninfo_all_blocks=1 00:21:43.424 --rc geninfo_unexecuted_blocks=1 00:21:43.424 00:21:43.424 ' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:43.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.424 --rc genhtml_branch_coverage=1 00:21:43.424 --rc genhtml_function_coverage=1 00:21:43.424 --rc genhtml_legend=1 00:21:43.424 --rc geninfo_all_blocks=1 00:21:43.424 --rc geninfo_unexecuted_blocks=1 00:21:43.424 00:21:43.424 ' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
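ftl_fio_basic, like every suite in this log, executes under the run_test wrapper that produced the START TEST / END TEST banners and the real/user/sys timing lines seen throughout. A rough sketch inferred from those banners rather than from the wrapper's actual source:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"     # yields the real/user/sys lines in this log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }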
00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:43.424 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74853 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74853 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 74853 ']' 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.425 11:32:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:43.425 [2024-10-07 11:32:25.073265] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
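Condensing the fio.sh setup just traced: the suite table maps a suite name to its list of fio jobs, and the FTL bdev name and JSON config are exported for those jobs to consume (all values as seen in this run; the per-job file layout is assumed):

  declare -A suite
  suite[basic]='randw-verify randw-verify-j2 randw-verify-depth128'
  tests=${suite[basic]}        # one fio job per entry
  device=0000:00:11.0          # base bdev, chosen earlier by the jq filter
  cache_device=0000:00:10.0    # nv-cache bdev
  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json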
00:21:43.425 [2024-10-07 11:32:25.073398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74853 ] 00:21:43.684 [2024-10-07 11:32:25.247946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:43.941 [2024-10-07 11:32:25.468307] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.941 [2024-10-07 11:32:25.468456] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.941 [2024-10-07 11:32:25.468487] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.873 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.873 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:21:44.873 11:32:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:44.873 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:44.873 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:44.873 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:44.873 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:44.873 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:45.130 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:45.130 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:45.130 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:45.130 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:45.130 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:45.130 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:21:45.130 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:21:45.130 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:45.389 { 00:21:45.389 "name": "nvme0n1", 00:21:45.389 "aliases": [ 00:21:45.389 "0e15acc9-1027-4ca5-af7f-e1f76df52875" 00:21:45.389 ], 00:21:45.389 "product_name": "NVMe disk", 00:21:45.389 "block_size": 4096, 00:21:45.389 "num_blocks": 1310720, 00:21:45.389 "uuid": "0e15acc9-1027-4ca5-af7f-e1f76df52875", 00:21:45.389 "numa_id": -1, 00:21:45.389 "assigned_rate_limits": { 00:21:45.389 "rw_ios_per_sec": 0, 00:21:45.389 "rw_mbytes_per_sec": 0, 00:21:45.389 "r_mbytes_per_sec": 0, 00:21:45.389 "w_mbytes_per_sec": 0 00:21:45.389 }, 00:21:45.389 "claimed": false, 00:21:45.389 "zoned": false, 00:21:45.389 "supported_io_types": { 00:21:45.389 "read": true, 00:21:45.389 "write": true, 00:21:45.389 "unmap": true, 00:21:45.389 "flush": true, 00:21:45.389 "reset": true, 00:21:45.389 "nvme_admin": true, 00:21:45.389 "nvme_io": true, 00:21:45.389 "nvme_io_md": false, 00:21:45.389 "write_zeroes": true, 00:21:45.389 "zcopy": false, 00:21:45.389 "get_zone_info": false, 00:21:45.389 "zone_management": false, 00:21:45.389 "zone_append": false, 00:21:45.389 "compare": true, 00:21:45.389 "compare_and_write": false, 00:21:45.389 "abort": true, 00:21:45.389 
"seek_hole": false, 00:21:45.389 "seek_data": false, 00:21:45.389 "copy": true, 00:21:45.389 "nvme_iov_md": false 00:21:45.389 }, 00:21:45.389 "driver_specific": { 00:21:45.389 "nvme": [ 00:21:45.389 { 00:21:45.389 "pci_address": "0000:00:11.0", 00:21:45.389 "trid": { 00:21:45.389 "trtype": "PCIe", 00:21:45.389 "traddr": "0000:00:11.0" 00:21:45.389 }, 00:21:45.389 "ctrlr_data": { 00:21:45.389 "cntlid": 0, 00:21:45.389 "vendor_id": "0x1b36", 00:21:45.389 "model_number": "QEMU NVMe Ctrl", 00:21:45.389 "serial_number": "12341", 00:21:45.389 "firmware_revision": "8.0.0", 00:21:45.389 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:45.389 "oacs": { 00:21:45.389 "security": 0, 00:21:45.389 "format": 1, 00:21:45.389 "firmware": 0, 00:21:45.389 "ns_manage": 1 00:21:45.389 }, 00:21:45.389 "multi_ctrlr": false, 00:21:45.389 "ana_reporting": false 00:21:45.389 }, 00:21:45.389 "vs": { 00:21:45.389 "nvme_version": "1.4" 00:21:45.389 }, 00:21:45.389 "ns_data": { 00:21:45.389 "id": 1, 00:21:45.389 "can_share": false 00:21:45.389 } 00:21:45.389 } 00:21:45.389 ], 00:21:45.389 "mp_policy": "active_passive" 00:21:45.389 } 00:21:45.389 } 00:21:45.389 ]' 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:45.389 11:32:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:45.646 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:45.646 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=efc7a176-4bb5-4321-980c-959951fbec96 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u efc7a176-4bb5-4321-980c-959951fbec96 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=afedc9af-9a4a-4b13-a3a2-ac298999c7b8 
00:21:45.902 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:21:45.902 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:46.160 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:46.160 { 00:21:46.160 "name": "afedc9af-9a4a-4b13-a3a2-ac298999c7b8", 00:21:46.160 "aliases": [ 00:21:46.160 "lvs/nvme0n1p0" 00:21:46.160 ], 00:21:46.160 "product_name": "Logical Volume", 00:21:46.160 "block_size": 4096, 00:21:46.160 "num_blocks": 26476544, 00:21:46.160 "uuid": "afedc9af-9a4a-4b13-a3a2-ac298999c7b8", 00:21:46.160 "assigned_rate_limits": { 00:21:46.160 "rw_ios_per_sec": 0, 00:21:46.160 "rw_mbytes_per_sec": 0, 00:21:46.160 "r_mbytes_per_sec": 0, 00:21:46.160 "w_mbytes_per_sec": 0 00:21:46.160 }, 00:21:46.160 "claimed": false, 00:21:46.160 "zoned": false, 00:21:46.160 "supported_io_types": { 00:21:46.160 "read": true, 00:21:46.160 "write": true, 00:21:46.160 "unmap": true, 00:21:46.160 "flush": false, 00:21:46.160 "reset": true, 00:21:46.160 "nvme_admin": false, 00:21:46.160 "nvme_io": false, 00:21:46.160 "nvme_io_md": false, 00:21:46.160 "write_zeroes": true, 00:21:46.160 "zcopy": false, 00:21:46.160 "get_zone_info": false, 00:21:46.160 "zone_management": false, 00:21:46.160 "zone_append": false, 00:21:46.160 "compare": false, 00:21:46.160 "compare_and_write": false, 00:21:46.160 "abort": false, 00:21:46.160 "seek_hole": true, 00:21:46.160 "seek_data": true, 00:21:46.160 "copy": false, 00:21:46.160 "nvme_iov_md": false 00:21:46.160 }, 00:21:46.160 "driver_specific": { 00:21:46.160 "lvol": { 00:21:46.160 "lvol_store_uuid": "efc7a176-4bb5-4321-980c-959951fbec96", 00:21:46.160 "base_bdev": "nvme0n1", 00:21:46.160 "thin_provision": true, 00:21:46.160 "num_allocated_clusters": 0, 00:21:46.160 "snapshot": false, 00:21:46.160 "clone": false, 00:21:46.160 "esnap_clone": false 00:21:46.160 } 00:21:46.160 } 00:21:46.160 } 00:21:46.160 ]' 00:21:46.160 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:46.160 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:21:46.160 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:46.417 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:46.417 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:46.417 11:32:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:21:46.417 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:46.417 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:46.417 11:32:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:46.674 11:32:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:46.674 11:32:28 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:46.674 11:32:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:46.674 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:46.674 11:32:28 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:46.674 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:21:46.674 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:21:46.674 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:46.931 { 00:21:46.931 "name": "afedc9af-9a4a-4b13-a3a2-ac298999c7b8", 00:21:46.931 "aliases": [ 00:21:46.931 "lvs/nvme0n1p0" 00:21:46.931 ], 00:21:46.931 "product_name": "Logical Volume", 00:21:46.931 "block_size": 4096, 00:21:46.931 "num_blocks": 26476544, 00:21:46.931 "uuid": "afedc9af-9a4a-4b13-a3a2-ac298999c7b8", 00:21:46.931 "assigned_rate_limits": { 00:21:46.931 "rw_ios_per_sec": 0, 00:21:46.931 "rw_mbytes_per_sec": 0, 00:21:46.931 "r_mbytes_per_sec": 0, 00:21:46.931 "w_mbytes_per_sec": 0 00:21:46.931 }, 00:21:46.931 "claimed": false, 00:21:46.931 "zoned": false, 00:21:46.931 "supported_io_types": { 00:21:46.931 "read": true, 00:21:46.931 "write": true, 00:21:46.931 "unmap": true, 00:21:46.931 "flush": false, 00:21:46.931 "reset": true, 00:21:46.931 "nvme_admin": false, 00:21:46.931 "nvme_io": false, 00:21:46.931 "nvme_io_md": false, 00:21:46.931 "write_zeroes": true, 00:21:46.931 "zcopy": false, 00:21:46.931 "get_zone_info": false, 00:21:46.931 "zone_management": false, 00:21:46.931 "zone_append": false, 00:21:46.931 "compare": false, 00:21:46.931 "compare_and_write": false, 00:21:46.931 "abort": false, 00:21:46.931 "seek_hole": true, 00:21:46.931 "seek_data": true, 00:21:46.931 "copy": false, 00:21:46.931 "nvme_iov_md": false 00:21:46.931 }, 00:21:46.931 "driver_specific": { 00:21:46.931 "lvol": { 00:21:46.931 "lvol_store_uuid": "efc7a176-4bb5-4321-980c-959951fbec96", 00:21:46.931 "base_bdev": "nvme0n1", 00:21:46.931 "thin_provision": true, 00:21:46.931 "num_allocated_clusters": 0, 00:21:46.931 "snapshot": false, 00:21:46.931 "clone": false, 00:21:46.931 "esnap_clone": false 00:21:46.931 } 00:21:46.931 } 00:21:46.931 } 00:21:46.931 ]' 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:46.931 11:32:28 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:47.188 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:21:47.188 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afedc9af-9a4a-4b13-a3a2-ac298999c7b8 00:21:47.444 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:47.444 { 00:21:47.444 "name": "afedc9af-9a4a-4b13-a3a2-ac298999c7b8", 00:21:47.444 "aliases": [ 00:21:47.444 "lvs/nvme0n1p0" 00:21:47.444 ], 00:21:47.444 "product_name": "Logical Volume", 00:21:47.444 "block_size": 4096, 00:21:47.444 "num_blocks": 26476544, 00:21:47.444 "uuid": "afedc9af-9a4a-4b13-a3a2-ac298999c7b8", 00:21:47.444 "assigned_rate_limits": { 00:21:47.444 "rw_ios_per_sec": 0, 00:21:47.444 "rw_mbytes_per_sec": 0, 00:21:47.444 "r_mbytes_per_sec": 0, 00:21:47.444 "w_mbytes_per_sec": 0 00:21:47.444 }, 00:21:47.444 "claimed": false, 00:21:47.444 "zoned": false, 00:21:47.444 "supported_io_types": { 00:21:47.444 "read": true, 00:21:47.444 "write": true, 00:21:47.444 "unmap": true, 00:21:47.444 "flush": false, 00:21:47.444 "reset": true, 00:21:47.444 "nvme_admin": false, 00:21:47.444 "nvme_io": false, 00:21:47.444 "nvme_io_md": false, 00:21:47.444 "write_zeroes": true, 00:21:47.444 "zcopy": false, 00:21:47.444 "get_zone_info": false, 00:21:47.444 "zone_management": false, 00:21:47.444 "zone_append": false, 00:21:47.444 "compare": false, 00:21:47.444 "compare_and_write": false, 00:21:47.444 "abort": false, 00:21:47.444 "seek_hole": true, 00:21:47.444 "seek_data": true, 00:21:47.444 "copy": false, 00:21:47.444 "nvme_iov_md": false 00:21:47.444 }, 00:21:47.444 "driver_specific": { 00:21:47.444 "lvol": { 00:21:47.444 "lvol_store_uuid": "efc7a176-4bb5-4321-980c-959951fbec96", 00:21:47.444 "base_bdev": "nvme0n1", 00:21:47.444 "thin_provision": true, 00:21:47.444 "num_allocated_clusters": 0, 00:21:47.444 "snapshot": false, 00:21:47.444 "clone": false, 00:21:47.445 "esnap_clone": false 00:21:47.445 } 00:21:47.445 } 00:21:47.445 } 00:21:47.445 ]' 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:47.445 11:32:28 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d afedc9af-9a4a-4b13-a3a2-ac298999c7b8 -c nvc0n1p0 --l2p_dram_limit 60 00:21:47.711 [2024-10-07 11:32:29.183983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.184430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:47.711 [2024-10-07 11:32:29.184515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:47.711 
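(Annotation: the "[: -eq: unary operator expected" failure from fio.sh line 52 earlier in this trace is an unset-variable artifact: whatever variable line 52 tests expanded to nothing, so the shell saw the two-word expression '[ -eq 1 ]'. The run continues because a failing [ inside an if merely selects the false branch — the xtrace resumes at fio.sh@56. A hypothetical guarded form; the variable name is illustrative, not taken from the log:

# Quoted, defaulted expansion keeps the test well-formed when the variable is unset.
if [ "${ftl_l2p_flat:-0}" -eq 1 ]; then
    echo "flat L2P branch taken"
fi
)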
[2024-10-07 11:32:29.184576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.184725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.184816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:47.711 [2024-10-07 11:32:29.184874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:47.711 [2024-10-07 11:32:29.184923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.185028] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:47.711 [2024-10-07 11:32:29.186231] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:47.711 [2024-10-07 11:32:29.186358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.186410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:47.711 [2024-10-07 11:32:29.186466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:21:47.711 [2024-10-07 11:32:29.186517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.186724] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 105a4659-e98c-4c6a-8ba5-b73e9fdeb412 00:21:47.711 [2024-10-07 11:32:29.188293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.188392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:47.711 [2024-10-07 11:32:29.188451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:47.711 [2024-10-07 11:32:29.188510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.196224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.196264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:47.711 [2024-10-07 11:32:29.196277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.592 ms 00:21:47.711 [2024-10-07 11:32:29.196290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.196416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.196433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:47.711 [2024-10-07 11:32:29.196444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:47.711 [2024-10-07 11:32:29.196461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.196535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.196550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:47.711 [2024-10-07 11:32:29.196561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:47.711 [2024-10-07 11:32:29.196573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.196621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:47.711 [2024-10-07 11:32:29.202000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 
11:32:29.202038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:47.711 [2024-10-07 11:32:29.202055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.405 ms 00:21:47.711 [2024-10-07 11:32:29.202066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.202114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.202126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:47.711 [2024-10-07 11:32:29.202139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:47.711 [2024-10-07 11:32:29.202149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.202219] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:47.711 [2024-10-07 11:32:29.202372] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:47.711 [2024-10-07 11:32:29.202394] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:47.711 [2024-10-07 11:32:29.202409] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:47.711 [2024-10-07 11:32:29.202425] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:47.711 [2024-10-07 11:32:29.202441] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:47.711 [2024-10-07 11:32:29.202456] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:47.711 [2024-10-07 11:32:29.202467] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:47.711 [2024-10-07 11:32:29.202480] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:47.711 [2024-10-07 11:32:29.202490] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:47.711 [2024-10-07 11:32:29.202503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.202514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:47.711 [2024-10-07 11:32:29.202527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:21:47.711 [2024-10-07 11:32:29.202537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.202624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.711 [2024-10-07 11:32:29.202637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:47.711 [2024-10-07 11:32:29.202653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:47.711 [2024-10-07 11:32:29.202663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.711 [2024-10-07 11:32:29.202785] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:47.711 [2024-10-07 11:32:29.202805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:47.711 [2024-10-07 11:32:29.202819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.711 [2024-10-07 11:32:29.202829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.711 [2024-10-07 11:32:29.202843] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:47.711 [2024-10-07 11:32:29.202852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:47.711 [2024-10-07 11:32:29.202864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:47.711 [2024-10-07 11:32:29.202873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:47.711 [2024-10-07 11:32:29.202885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:47.711 [2024-10-07 11:32:29.202895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.711 [2024-10-07 11:32:29.202906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:47.711 [2024-10-07 11:32:29.202916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:47.711 [2024-10-07 11:32:29.202928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.711 [2024-10-07 11:32:29.202937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:47.711 [2024-10-07 11:32:29.202949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:47.711 [2024-10-07 11:32:29.202958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.711 [2024-10-07 11:32:29.202972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:47.711 [2024-10-07 11:32:29.202981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:47.712 [2024-10-07 11:32:29.202993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:47.712 [2024-10-07 11:32:29.203015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.712 [2024-10-07 11:32:29.203036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:47.712 [2024-10-07 11:32:29.203046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.712 [2024-10-07 11:32:29.203069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:47.712 [2024-10-07 11:32:29.203081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.712 [2024-10-07 11:32:29.203103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:47.712 [2024-10-07 11:32:29.203113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.712 [2024-10-07 11:32:29.203133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:47.712 [2024-10-07 11:32:29.203147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.712 [2024-10-07 11:32:29.203168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:47.712 [2024-10-07 11:32:29.203177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:47.712 [2024-10-07 11:32:29.203188] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.712 [2024-10-07 11:32:29.203197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:47.712 [2024-10-07 11:32:29.203209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:47.712 [2024-10-07 11:32:29.203234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:47.712 [2024-10-07 11:32:29.203256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:47.712 [2024-10-07 11:32:29.203268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203277] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:47.712 [2024-10-07 11:32:29.203289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:47.712 [2024-10-07 11:32:29.203304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.712 [2024-10-07 11:32:29.203319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.712 [2024-10-07 11:32:29.203329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:47.712 [2024-10-07 11:32:29.203344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:47.712 [2024-10-07 11:32:29.203353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:47.712 [2024-10-07 11:32:29.203365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:47.712 [2024-10-07 11:32:29.203374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:47.712 [2024-10-07 11:32:29.203385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:47.712 [2024-10-07 11:32:29.203399] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:47.712 [2024-10-07 11:32:29.203414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.712 [2024-10-07 11:32:29.203426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:47.712 [2024-10-07 11:32:29.203439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:47.712 [2024-10-07 11:32:29.203451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:47.712 [2024-10-07 11:32:29.203464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:47.712 [2024-10-07 11:32:29.203474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:47.712 [2024-10-07 11:32:29.203488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:47.712 [2024-10-07 11:32:29.203499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:47.712 [2024-10-07 11:32:29.203511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:47.712 [2024-10-07 11:32:29.203521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:47.712 [2024-10-07 11:32:29.203537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:47.712 [2024-10-07 11:32:29.203548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:47.712 [2024-10-07 11:32:29.203562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:47.712 [2024-10-07 11:32:29.203572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:47.712 [2024-10-07 11:32:29.203585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:47.712 [2024-10-07 11:32:29.203594] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:47.712 [2024-10-07 11:32:29.203608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.712 [2024-10-07 11:32:29.203619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:47.712 [2024-10-07 11:32:29.203632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:47.712 [2024-10-07 11:32:29.203642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:47.712 [2024-10-07 11:32:29.203654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:47.712 [2024-10-07 11:32:29.203666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.712 [2024-10-07 11:32:29.203679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:47.712 [2024-10-07 11:32:29.203689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:21:47.712 [2024-10-07 11:32:29.203702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.712 [2024-10-07 11:32:29.203785] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
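(Annotation: at this point the trace has assembled the complete device stack under test. The same stack can be rebuilt by hand with the RPCs recorded above — a sketch reusing this run's PCI addresses, sizes, and UUIDs, not a general recipe:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # 5120 MiB base namespace nvme0n1
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore efc7a176-...
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u efc7a176-4bb5-4321-980c-959951fbec96   # thin (oversubscribed) lvol
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV cache controller
$rpc bdev_split_create nvc0n1 -s 5171 1                              # nvc0n1p0, 5171 MiB cache slice
$rpc -t 240 bdev_ftl_create -b ftl0 -d afedc9af-9a4a-4b13-a3a2-ac298999c7b8 -c nvc0n1p0 --l2p_dram_limit 60
)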
00:21:47.712 [2024-10-07 11:32:29.203807] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:53.004 [2024-10-07 11:32:34.647747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.004 [2024-10-07 11:32:34.647805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:53.005 [2024-10-07 11:32:34.647823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5452.789 ms 00:21:53.005 [2024-10-07 11:32:34.647838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.005 [2024-10-07 11:32:34.698115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.005 [2024-10-07 11:32:34.698177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.005 [2024-10-07 11:32:34.698198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.008 ms 00:21:53.005 [2024-10-07 11:32:34.698215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.005 [2024-10-07 11:32:34.698423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.005 [2024-10-07 11:32:34.698456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:53.005 [2024-10-07 11:32:34.698472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:53.005 [2024-10-07 11:32:34.698492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.748693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.748755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.263 [2024-10-07 11:32:34.748770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.210 ms 00:21:53.263 [2024-10-07 11:32:34.748784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.748838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.748854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:53.263 [2024-10-07 11:32:34.748865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:53.263 [2024-10-07 11:32:34.748881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.749422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.749449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:53.263 [2024-10-07 11:32:34.749461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:21:53.263 [2024-10-07 11:32:34.749475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.749612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.749630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:53.263 [2024-10-07 11:32:34.749641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:53.263 [2024-10-07 11:32:34.749658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.771766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.771823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:53.263 [2024-10-07 
11:32:34.771840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.108 ms 00:21:53.263 [2024-10-07 11:32:34.771857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.787500] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:53.263 [2024-10-07 11:32:34.804233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.804284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:53.263 [2024-10-07 11:32:34.804304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.279 ms 00:21:53.263 [2024-10-07 11:32:34.804315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.899001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.899064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:53.263 [2024-10-07 11:32:34.899084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.777 ms 00:21:53.263 [2024-10-07 11:32:34.899096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.899337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.899360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:53.263 [2024-10-07 11:32:34.899378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:21:53.263 [2024-10-07 11:32:34.899393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.263 [2024-10-07 11:32:34.936927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.263 [2024-10-07 11:32:34.936978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:53.263 [2024-10-07 11:32:34.936997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.520 ms 00:21:53.263 [2024-10-07 11:32:34.937008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 11:32:34.975134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.522 [2024-10-07 11:32:34.975199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:53.522 [2024-10-07 11:32:34.975220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.125 ms 00:21:53.522 [2024-10-07 11:32:34.975231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 11:32:34.976074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.522 [2024-10-07 11:32:34.976105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:53.522 [2024-10-07 11:32:34.976120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:21:53.522 [2024-10-07 11:32:34.976131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 11:32:35.103597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.522 [2024-10-07 11:32:35.103662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:53.522 [2024-10-07 11:32:35.103688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 127.575 ms 00:21:53.522 [2024-10-07 11:32:35.103699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 
11:32:35.142093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.522 [2024-10-07 11:32:35.142157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:53.522 [2024-10-07 11:32:35.142182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.335 ms 00:21:53.522 [2024-10-07 11:32:35.142193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 11:32:35.180584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.522 [2024-10-07 11:32:35.180648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:53.522 [2024-10-07 11:32:35.180667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.381 ms 00:21:53.522 [2024-10-07 11:32:35.180678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 11:32:35.217311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.522 [2024-10-07 11:32:35.217356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:53.522 [2024-10-07 11:32:35.217374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.627 ms 00:21:53.522 [2024-10-07 11:32:35.217384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 11:32:35.217445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.522 [2024-10-07 11:32:35.217458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:53.522 [2024-10-07 11:32:35.217475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:53.522 [2024-10-07 11:32:35.217485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 11:32:35.217648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.522 [2024-10-07 11:32:35.217669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:53.522 [2024-10-07 11:32:35.217684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:53.522 [2024-10-07 11:32:35.217694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.522 [2024-10-07 11:32:35.218832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6044.186 ms, result 0 00:21:53.522 { 00:21:53.522 "name": "ftl0", 00:21:53.522 "uuid": "105a4659-e98c-4c6a-8ba5-b73e9fdeb412" 00:21:53.522 } 00:21:53.780 11:32:35 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:53.780 11:32:35 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:21:53.780 11:32:35 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:53.780 11:32:35 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:21:53.780 11:32:35 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:53.780 11:32:35 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:53.780 11:32:35 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:53.780 11:32:35 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:54.038 [ 00:21:54.038 { 00:21:54.038 "name": "ftl0", 00:21:54.038 "aliases": [ 00:21:54.038 "105a4659-e98c-4c6a-8ba5-b73e9fdeb412" 00:21:54.038 ], 00:21:54.038 "product_name": "FTL 
disk", 00:21:54.038 "block_size": 4096, 00:21:54.038 "num_blocks": 20971520, 00:21:54.038 "uuid": "105a4659-e98c-4c6a-8ba5-b73e9fdeb412", 00:21:54.038 "assigned_rate_limits": { 00:21:54.038 "rw_ios_per_sec": 0, 00:21:54.038 "rw_mbytes_per_sec": 0, 00:21:54.038 "r_mbytes_per_sec": 0, 00:21:54.038 "w_mbytes_per_sec": 0 00:21:54.038 }, 00:21:54.038 "claimed": false, 00:21:54.038 "zoned": false, 00:21:54.038 "supported_io_types": { 00:21:54.038 "read": true, 00:21:54.038 "write": true, 00:21:54.038 "unmap": true, 00:21:54.038 "flush": true, 00:21:54.038 "reset": false, 00:21:54.038 "nvme_admin": false, 00:21:54.038 "nvme_io": false, 00:21:54.038 "nvme_io_md": false, 00:21:54.038 "write_zeroes": true, 00:21:54.038 "zcopy": false, 00:21:54.038 "get_zone_info": false, 00:21:54.038 "zone_management": false, 00:21:54.038 "zone_append": false, 00:21:54.038 "compare": false, 00:21:54.038 "compare_and_write": false, 00:21:54.038 "abort": false, 00:21:54.038 "seek_hole": false, 00:21:54.038 "seek_data": false, 00:21:54.038 "copy": false, 00:21:54.038 "nvme_iov_md": false 00:21:54.038 }, 00:21:54.038 "driver_specific": { 00:21:54.038 "ftl": { 00:21:54.038 "base_bdev": "afedc9af-9a4a-4b13-a3a2-ac298999c7b8", 00:21:54.038 "cache": "nvc0n1p0" 00:21:54.038 } 00:21:54.038 } 00:21:54.038 } 00:21:54.038 ] 00:21:54.038 11:32:35 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:21:54.038 11:32:35 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:54.038 11:32:35 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:54.295 11:32:35 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:54.295 11:32:35 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:54.553 [2024-10-07 11:32:36.106221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.106290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:54.553 [2024-10-07 11:32:36.106306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:54.553 [2024-10-07 11:32:36.106320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.106361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:54.553 [2024-10-07 11:32:36.110518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.110554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:54.553 [2024-10-07 11:32:36.110570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:21:54.553 [2024-10-07 11:32:36.110580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.111066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.111095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:54.553 [2024-10-07 11:32:36.111110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:21:54.553 [2024-10-07 11:32:36.111120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.113630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.113654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:54.553 
[2024-10-07 11:32:36.113671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.483 ms 00:21:54.553 [2024-10-07 11:32:36.113681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.118782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.118819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:54.553 [2024-10-07 11:32:36.118834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.075 ms 00:21:54.553 [2024-10-07 11:32:36.118845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.156120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.156175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:54.553 [2024-10-07 11:32:36.156195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.256 ms 00:21:54.553 [2024-10-07 11:32:36.156206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.179073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.179142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:54.553 [2024-10-07 11:32:36.179162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.832 ms 00:21:54.553 [2024-10-07 11:32:36.179173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.179446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.179460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:54.553 [2024-10-07 11:32:36.179475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:21:54.553 [2024-10-07 11:32:36.179486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.218638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.218980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:54.553 [2024-10-07 11:32:36.219020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.169 ms 00:21:54.553 [2024-10-07 11:32:36.219032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.553 [2024-10-07 11:32:36.258568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.553 [2024-10-07 11:32:36.258826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:54.553 [2024-10-07 11:32:36.258861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.512 ms 00:21:54.553 [2024-10-07 11:32:36.258872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.811 [2024-10-07 11:32:36.294219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.811 [2024-10-07 11:32:36.294376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:54.811 [2024-10-07 11:32:36.294404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.321 ms 00:21:54.811 [2024-10-07 11:32:36.294414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.811 [2024-10-07 11:32:36.330039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.811 [2024-10-07 11:32:36.330092] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:54.811 [2024-10-07 11:32:36.330110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.544 ms 00:21:54.811 [2024-10-07 11:32:36.330120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.811 [2024-10-07 11:32:36.330172] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:54.811 [2024-10-07 11:32:36.330190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:54.811 [2024-10-07 11:32:36.330209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:54.811 [2024-10-07 11:32:36.330221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:54.811 [2024-10-07 11:32:36.330242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:54.811 [2024-10-07 11:32:36.330253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:54.811 [2024-10-07 11:32:36.330266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:54.811 [2024-10-07 11:32:36.330277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 
[2024-10-07 11:32:36.330479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:54.812 [2024-10-07 11:32:36.330802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:21:54.812 [2024-10-07 11:32:36.330816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free [Bands 49-96 elided: each reports 0 / 261120 wr_cnt: 0 state: free] 00:21:54.813 [2024-10-07 11:32:36.331446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:54.813 [2024-10-07 11:32:36.331461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:54.813 [2024-10-07 11:32:36.331472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:54.813 [2024-10-07 11:32:36.331485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:54.813 [2024-10-07 11:32:36.331504] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:54.813 [2024-10-07 11:32:36.331517] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 105a4659-e98c-4c6a-8ba5-b73e9fdeb412 00:21:54.813 [2024-10-07 11:32:36.331528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:54.813 [2024-10-07 11:32:36.331543] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:54.813 [2024-10-07 11:32:36.331553] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:54.813 [2024-10-07 11:32:36.331566] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:54.813 [2024-10-07 11:32:36.331575] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:54.813 [2024-10-07 11:32:36.331590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:54.813 [2024-10-07 11:32:36.331601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:54.813 [2024-10-07 11:32:36.331612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:54.813 [2024-10-07 11:32:36.331621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:54.813 [2024-10-07 11:32:36.331633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.813 [2024-10-07 11:32:36.331644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:54.813 [2024-10-07 11:32:36.331657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:21:54.813 [2024-10-07 11:32:36.331670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.813 [2024-10-07 11:32:36.352042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.813 [2024-10-07 11:32:36.352079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:54.813 [2024-10-07 11:32:36.352096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.335 ms 00:21:54.813 [2024-10-07 11:32:36.352106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.813 [2024-10-07 11:32:36.352649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.813 [2024-10-07 11:32:36.352664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:54.813 [2024-10-07 11:32:36.352681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:21:54.813 [2024-10-07 11:32:36.352691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.813 [2024-10-07 11:32:36.422524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.813 [2024-10-07 11:32:36.422575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:54.813 [2024-10-07 11:32:36.422594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.813 [2024-10-07 11:32:36.422605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:54.813 [2024-10-07 11:32:36.422681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.813 [2024-10-07 11:32:36.422693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:54.813 [2024-10-07 11:32:36.422710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.813 [2024-10-07 11:32:36.422720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.813 [2024-10-07 11:32:36.422874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.813 [2024-10-07 11:32:36.422889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:54.813 [2024-10-07 11:32:36.422903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.813 [2024-10-07 11:32:36.422914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.813 [2024-10-07 11:32:36.422966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:54.813 [2024-10-07 11:32:36.422978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:54.813 [2024-10-07 11:32:36.422991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:54.813 [2024-10-07 11:32:36.423005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.558700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.100 [2024-10-07 11:32:36.558771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.100 [2024-10-07 11:32:36.558789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.100 [2024-10-07 11:32:36.558800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.663903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.100 [2024-10-07 11:32:36.663967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.100 [2024-10-07 11:32:36.663989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.100 [2024-10-07 11:32:36.664000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.664127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.100 [2024-10-07 11:32:36.664140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.100 [2024-10-07 11:32:36.664153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.100 [2024-10-07 11:32:36.664163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.664247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.100 [2024-10-07 11:32:36.664260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.100 [2024-10-07 11:32:36.664273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.100 [2024-10-07 11:32:36.664283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.664427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.100 [2024-10-07 11:32:36.664441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.100 [2024-10-07 11:32:36.664455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.100 [2024-10-07 
11:32:36.664465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.664520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.100 [2024-10-07 11:32:36.664533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:55.100 [2024-10-07 11:32:36.664545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.100 [2024-10-07 11:32:36.664556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.664609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.100 [2024-10-07 11:32:36.664623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.100 [2024-10-07 11:32:36.664637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.100 [2024-10-07 11:32:36.664648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.664712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.100 [2024-10-07 11:32:36.664724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.100 [2024-10-07 11:32:36.664737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.100 [2024-10-07 11:32:36.664773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.100 [2024-10-07 11:32:36.664946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.609 ms, result 0 00:21:55.100 true 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74853 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 74853 ']' 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 74853 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74853 00:21:55.100 killing process with pid 74853 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74853' 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 74853 00:21:55.100 11:32:36 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 74853 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:00.370 11:32:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:00.370 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:22:00.370 fio-3.35 00:22:00.370 Starting 1 thread 00:22:05.636 00:22:05.636 test: (groupid=0, jobs=1): err= 0: pid=75097: Mon Oct 7 11:32:47 2024 00:22:05.636 read: IOPS=978, BW=64.9MiB/s (68.1MB/s)(255MiB/3919msec) 00:22:05.636 slat (nsec): min=4246, max=30139, avg=6334.13, stdev=2746.43 00:22:05.636 clat (usec): min=298, max=1236, avg=463.17, stdev=62.55 00:22:05.636 lat (usec): min=310, max=1249, avg=469.50, stdev=62.85 00:22:05.636 clat percentiles (usec): 00:22:05.636 | 1.00th=[ 322], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 400], 00:22:05.636 | 30.00th=[ 449], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 465], 00:22:05.636 | 70.00th=[ 502], 80.00th=[ 523], 90.00th=[ 529], 95.00th=[ 545], 00:22:05.636 | 99.00th=[ 611], 99.50th=[ 668], 99.90th=[ 840], 99.95th=[ 1020], 00:22:05.636 | 99.99th=[ 1237] 00:22:05.636 write: IOPS=984, BW=65.4MiB/s (68.6MB/s)(256MiB/3915msec); 0 zone resets 00:22:05.636 slat (nsec): min=15284, max=78943, avg=19548.82, stdev=4635.05 00:22:05.636 clat (usec): min=349, max=1297, avg=519.32, stdev=77.24 00:22:05.636 lat (usec): min=365, max=1330, avg=538.87, stdev=77.64 00:22:05.636 clat percentiles (usec): 00:22:05.636 | 1.00th=[ 404], 5.00th=[ 412], 10.00th=[ 429], 20.00th=[ 474], 00:22:05.636 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 515], 60.00th=[ 537], 00:22:05.636 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 619], 00:22:05.636 | 99.00th=[ 816], 99.50th=[ 889], 99.90th=[ 1172], 99.95th=[ 1270], 00:22:05.636 | 99.99th=[ 1303] 00:22:05.636 bw ( KiB/s): min=62968, max=69768, per=99.70%, avg=66776.00, stdev=2476.79, samples=7 00:22:05.636 iops : min= 926, max= 1026, avg=982.00, stdev=36.42, samples=7 00:22:05.636 lat (usec) : 500=58.58%, 750=40.58%, 1000=0.69% 00:22:05.636 
lat (msec) : 2=0.16% 00:22:05.637 cpu : usr=99.21%, sys=0.10%, ctx=9, majf=0, minf=1169 00:22:05.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:05.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.637 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:05.637 00:22:05.637 Run status group 0 (all jobs): 00:22:05.637 READ: bw=64.9MiB/s (68.1MB/s), 64.9MiB/s-64.9MiB/s (68.1MB/s-68.1MB/s), io=255MiB (267MB), run=3919-3919msec 00:22:05.637 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=256MiB (269MB), run=3915-3915msec 00:22:07.581 ----------------------------------------------------- 00:22:07.581 Suppressions used: 00:22:07.581 count bytes template 00:22:07.581 1 5 /usr/src/fio/parse.c 00:22:07.581 1 8 libtcmalloc_minimal.so 00:22:07.581 1 904 libcrypto.so 00:22:07.581 ----------------------------------------------------- 00:22:07.581 00:22:07.581 11:32:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:07.581 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.581 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:07.839 11:32:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:07.839 11:32:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:07.839 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.839 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:07.839 11:32:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:07.839 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:07.840 11:32:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:07.840 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:07.840 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:07.840 fio-3.35 00:22:07.840 Starting 2 threads 00:22:34.432 00:22:34.432 first_half: (groupid=0, jobs=1): err= 0: pid=75200: Mon Oct 7 11:33:16 2024 00:22:34.432 read: IOPS=2589, BW=10.1MiB/s (10.6MB/s)(255MiB/25199msec) 00:22:34.432 slat (nsec): min=3461, max=52425, avg=6082.68, stdev=2170.51 00:22:34.432 clat (usec): min=878, max=271023, avg=36562.00, stdev=19268.11 00:22:34.432 lat (usec): min=884, max=271030, avg=36568.08, stdev=19268.30 00:22:34.432 clat percentiles (msec): 00:22:34.432 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:22:34.432 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:34.432 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 47], 00:22:34.432 | 99.00th=[ 148], 99.50th=[ 171], 99.90th=[ 209], 99.95th=[ 230], 00:22:34.432 | 99.99th=[ 264] 00:22:34.432 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(256MiB/21486msec); 0 zone resets 00:22:34.432 slat (usec): min=4, max=612, avg= 8.49, stdev= 6.35 00:22:34.432 clat (usec): min=380, max=116667, avg=12756.72, stdev=21060.91 00:22:34.432 lat (usec): min=396, max=116674, avg=12765.21, stdev=21061.09 00:22:34.432 clat percentiles (usec): 00:22:34.432 | 1.00th=[ 930], 5.00th=[ 1205], 10.00th=[ 1483], 20.00th=[ 1926], 00:22:34.432 | 30.00th=[ 3228], 40.00th=[ 4948], 50.00th=[ 5932], 60.00th=[ 6783], 00:22:34.432 | 70.00th=[ 8455], 80.00th=[ 12518], 90.00th=[ 32375], 95.00th=[ 76022], 00:22:34.432 | 99.00th=[ 88605], 99.50th=[ 98042], 99.90th=[110625], 99.95th=[113771], 00:22:34.432 | 99.99th=[115868] 00:22:34.432 bw ( KiB/s): min= 840, max=40016, per=85.94%, avg=20971.52, stdev=11593.17, samples=25 00:22:34.432 iops : min= 210, max=10004, avg=5242.88, stdev=2898.29, samples=25 00:22:34.432 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.74% 00:22:34.432 lat (msec) : 2=9.99%, 4=6.86%, 10=20.85%, 20=7.42%, 50=47.68% 00:22:34.432 lat (msec) : 100=5.03%, 250=1.33%, 500=0.01% 00:22:34.432 cpu : usr=99.10%, sys=0.18%, ctx=44, majf=0, minf=5617 00:22:34.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:34.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.432 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:34.432 issued rwts: total=65245,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:34.432 second_half: (groupid=0, jobs=1): err= 0: pid=75201: Mon Oct 7 11:33:16 2024 00:22:34.432 read: IOPS=2599, BW=10.2MiB/s (10.6MB/s)(255MiB/25076msec) 00:22:34.432 slat (nsec): min=3477, max=34759, avg=5995.35, stdev=1962.57 00:22:34.433 clat (usec): min=856, max=278429, avg=37252.10, stdev=19018.81 00:22:34.433 lat (usec): min=862, max=278436, avg=37258.10, stdev=19019.00 00:22:34.433 clat percentiles (msec): 00:22:34.433 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:22:34.433 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:22:34.433 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 40], 95.00th=[ 52], 
00:22:34.433 | 99.00th=[ 150], 99.50th=[ 163], 99.90th=[ 190], 99.95th=[ 199], 00:22:34.433 | 99.99th=[ 271] 00:22:34.433 write: IOPS=3351, BW=13.1MiB/s (13.7MB/s)(256MiB/19556msec); 0 zone resets 00:22:34.433 slat (usec): min=4, max=2950, avg= 8.22, stdev=13.30 00:22:34.433 clat (usec): min=409, max=117481, avg=11901.74, stdev=20926.57 00:22:34.433 lat (usec): min=423, max=117491, avg=11909.96, stdev=20926.70 00:22:34.433 clat percentiles (usec): 00:22:34.433 | 1.00th=[ 979], 5.00th=[ 1287], 10.00th=[ 1516], 20.00th=[ 1762], 00:22:34.433 | 30.00th=[ 2114], 40.00th=[ 3589], 50.00th=[ 5014], 60.00th=[ 6194], 00:22:34.433 | 70.00th=[ 7832], 80.00th=[ 11731], 90.00th=[ 27132], 95.00th=[ 76022], 00:22:34.433 | 99.00th=[ 89654], 99.50th=[100140], 99.90th=[110625], 99.95th=[113771], 00:22:34.433 | 99.99th=[115868] 00:22:34.433 bw ( KiB/s): min= 984, max=41696, per=93.43%, avg=22797.91, stdev=12381.37, samples=23 00:22:34.433 iops : min= 246, max=10424, avg=5699.48, stdev=3095.34, samples=23 00:22:34.433 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.48% 00:22:34.433 lat (msec) : 2=13.56%, 4=8.17%, 10=16.02%, 20=7.51%, 50=47.62% 00:22:34.433 lat (msec) : 100=5.00%, 250=1.53%, 500=0.01% 00:22:34.433 cpu : usr=99.23%, sys=0.22%, ctx=52, majf=0, minf=5508 00:22:34.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:34.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.433 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:34.433 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:34.433 00:22:34.433 Run status group 0 (all jobs): 00:22:34.433 READ: bw=20.2MiB/s (21.2MB/s), 10.1MiB/s-10.2MiB/s (10.6MB/s-10.6MB/s), io=509MiB (534MB), run=25076-25199msec 00:22:34.433 WRITE: bw=23.8MiB/s (25.0MB/s), 11.9MiB/s-13.1MiB/s (12.5MB/s-13.7MB/s), io=512MiB (537MB), run=19556-21486msec 00:22:37.018 ----------------------------------------------------- 00:22:37.019 Suppressions used: 00:22:37.019 count bytes template 00:22:37.019 2 10 /usr/src/fio/parse.c 00:22:37.019 3 288 /usr/src/fio/iolog.c 00:22:37.019 1 8 libtcmalloc_minimal.so 00:22:37.019 1 904 libcrypto.so 00:22:37.019 ----------------------------------------------------- 00:22:37.019 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:37.019 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:22:37.277 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:37.277 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:37.277 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:22:37.277 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:37.277 11:33:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:37.277 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:37.277 fio-3.35 00:22:37.277 Starting 1 thread 00:22:52.178 00:22:52.178 test: (groupid=0, jobs=1): err= 0: pid=75536: Mon Oct 7 11:33:33 2024 00:22:52.178 read: IOPS=7618, BW=29.8MiB/s (31.2MB/s)(255MiB/8558msec) 00:22:52.178 slat (nsec): min=3458, max=32912, avg=5366.17, stdev=1841.21 00:22:52.178 clat (usec): min=631, max=36927, avg=16790.21, stdev=1118.72 00:22:52.178 lat (usec): min=635, max=36931, avg=16795.57, stdev=1118.71 00:22:52.178 clat percentiles (usec): 00:22:52.178 | 1.00th=[15795], 5.00th=[15926], 10.00th=[16057], 20.00th=[16188], 00:22:52.178 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16581], 60.00th=[16712], 00:22:52.178 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482], 00:22:52.178 | 99.00th=[19792], 99.50th=[20317], 99.90th=[28443], 99.95th=[32637], 00:22:52.178 | 99.99th=[35914] 00:22:52.178 write: IOPS=13.3k, BW=51.9MiB/s (54.4MB/s)(256MiB/4930msec); 0 zone resets 00:22:52.178 slat (usec): min=4, max=1080, avg= 7.70, stdev= 7.51 00:22:52.178 clat (usec): min=607, max=59628, avg=9581.75, stdev=11757.51 00:22:52.178 lat (usec): min=615, max=59636, avg=9589.46, stdev=11757.52 00:22:52.178 clat percentiles (usec): 00:22:52.178 | 1.00th=[ 971], 5.00th=[ 1156], 10.00th=[ 1303], 20.00th=[ 1483], 00:22:52.178 | 30.00th=[ 1663], 40.00th=[ 2024], 50.00th=[ 6325], 60.00th=[ 7373], 00:22:52.178 | 70.00th=[ 8356], 80.00th=[10159], 90.00th=[34866], 95.00th=[36439], 00:22:52.178 | 99.00th=[38536], 99.50th=[41157], 99.90th=[53740], 99.95th=[54264], 00:22:52.178 | 99.99th=[55837] 00:22:52.178 bw ( KiB/s): min=38936, max=72224, per=98.60%, avg=52428.80, stdev=9986.40, samples=10 00:22:52.178 iops : min= 9734, max=18056, avg=13107.20, stdev=2496.60, samples=10 00:22:52.178 lat (usec) : 750=0.02%, 1000=0.71% 00:22:52.178 lat (msec) : 2=19.23%, 4=1.14%, 10=18.65%, 20=51.92%, 50=8.20% 00:22:52.178 lat (msec) : 100=0.14% 00:22:52.178 cpu : usr=98.96%, sys=0.34%, ctx=25, 
majf=0, minf=5565 00:22:52.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:52.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.178 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.178 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.178 00:22:52.178 Run status group 0 (all jobs): 00:22:52.178 READ: bw=29.8MiB/s (31.2MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.2MB/s), io=255MiB (267MB), run=8558-8558msec 00:22:52.178 WRITE: bw=51.9MiB/s (54.4MB/s), 51.9MiB/s-51.9MiB/s (54.4MB/s-54.4MB/s), io=256MiB (268MB), run=4930-4930msec 00:22:54.707 ----------------------------------------------------- 00:22:54.707 Suppressions used: 00:22:54.707 count bytes template 00:22:54.707 1 5 /usr/src/fio/parse.c 00:22:54.707 2 192 /usr/src/fio/iolog.c 00:22:54.707 1 8 libtcmalloc_minimal.so 00:22:54.707 1 904 libcrypto.so 00:22:54.707 ----------------------------------------------------- 00:22:54.707 00:22:54.707 11:33:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:54.707 11:33:35 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.707 11:33:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:54.707 Remove shared memory files 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58251 /dev/shm/spdk_tgt_trace.pid73746 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:54.707 ************************************ 00:22:54.707 END TEST ftl_fio_basic 00:22:54.707 ************************************ 00:22:54.707 00:22:54.707 real 1m11.356s 00:22:54.707 user 2m35.619s 00:22:54.707 sys 0m4.044s 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:54.707 11:33:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:54.707 11:33:36 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:54.707 11:33:36 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:54.707 11:33:36 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:54.707 11:33:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:54.707 ************************************ 00:22:54.707 START TEST ftl_bdevperf 00:22:54.708 ************************************ 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:54.708 * Looking for test storage... 
00:22:54.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:54.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.708 --rc genhtml_branch_coverage=1 00:22:54.708 --rc genhtml_function_coverage=1 00:22:54.708 --rc genhtml_legend=1 00:22:54.708 --rc geninfo_all_blocks=1 00:22:54.708 --rc geninfo_unexecuted_blocks=1 00:22:54.708 00:22:54.708 ' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:54.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.708 --rc genhtml_branch_coverage=1 00:22:54.708 
--rc genhtml_function_coverage=1 00:22:54.708 --rc genhtml_legend=1 00:22:54.708 --rc geninfo_all_blocks=1 00:22:54.708 --rc geninfo_unexecuted_blocks=1 00:22:54.708 00:22:54.708 ' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:54.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.708 --rc genhtml_branch_coverage=1 00:22:54.708 --rc genhtml_function_coverage=1 00:22:54.708 --rc genhtml_legend=1 00:22:54.708 --rc geninfo_all_blocks=1 00:22:54.708 --rc geninfo_unexecuted_blocks=1 00:22:54.708 00:22:54.708 ' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:54.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.708 --rc genhtml_branch_coverage=1 00:22:54.708 --rc genhtml_function_coverage=1 00:22:54.708 --rc genhtml_legend=1 00:22:54.708 --rc geninfo_all_blocks=1 00:22:54.708 --rc geninfo_unexecuted_blocks=1 00:22:54.708 00:22:54.708 ' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75780 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75780 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 75780 ']' 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.708 11:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:54.967 [2024-10-07 11:33:36.507312] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:22:54.967 [2024-10-07 11:33:36.507628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75780 ] 00:22:55.226 [2024-10-07 11:33:36.680297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.226 [2024-10-07 11:33:36.900070] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.793 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.793 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:22:55.793 11:33:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:55.793 11:33:37 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:55.793 11:33:37 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:55.793 11:33:37 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:55.793 11:33:37 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:55.793 11:33:37 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:56.051 11:33:37 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:56.051 11:33:37 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:56.051 11:33:37 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:56.051 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:56.051 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:56.051 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:56.051 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:56.051 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:56.309 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:56.309 { 00:22:56.309 "name": "nvme0n1", 00:22:56.309 "aliases": [ 00:22:56.309 "415de97c-1518-40af-9632-9720ae59761c" 00:22:56.309 ], 00:22:56.309 "product_name": "NVMe disk", 00:22:56.309 "block_size": 4096, 00:22:56.309 "num_blocks": 1310720, 00:22:56.309 "uuid": "415de97c-1518-40af-9632-9720ae59761c", 00:22:56.309 "numa_id": -1, 00:22:56.309 "assigned_rate_limits": { 00:22:56.309 "rw_ios_per_sec": 0, 00:22:56.309 "rw_mbytes_per_sec": 0, 00:22:56.309 "r_mbytes_per_sec": 0, 00:22:56.309 "w_mbytes_per_sec": 0 00:22:56.309 }, 00:22:56.309 "claimed": true, 00:22:56.309 "claim_type": "read_many_write_one", 00:22:56.309 "zoned": false, 00:22:56.309 "supported_io_types": { 00:22:56.309 "read": true, 00:22:56.309 "write": true, 00:22:56.309 "unmap": true, 00:22:56.309 "flush": true, 00:22:56.309 "reset": true, 00:22:56.309 "nvme_admin": true, 00:22:56.309 "nvme_io": true, 00:22:56.309 "nvme_io_md": false, 00:22:56.309 "write_zeroes": true, 00:22:56.309 "zcopy": false, 00:22:56.309 "get_zone_info": false, 00:22:56.309 "zone_management": false, 00:22:56.309 "zone_append": false, 00:22:56.309 "compare": true, 00:22:56.309 "compare_and_write": false, 00:22:56.309 "abort": true, 00:22:56.309 "seek_hole": false, 00:22:56.309 "seek_data": false, 00:22:56.309 "copy": true, 00:22:56.309 "nvme_iov_md": false 00:22:56.309 }, 00:22:56.309 "driver_specific": { 00:22:56.309 
"nvme": [ 00:22:56.309 { 00:22:56.309 "pci_address": "0000:00:11.0", 00:22:56.309 "trid": { 00:22:56.309 "trtype": "PCIe", 00:22:56.309 "traddr": "0000:00:11.0" 00:22:56.309 }, 00:22:56.309 "ctrlr_data": { 00:22:56.309 "cntlid": 0, 00:22:56.309 "vendor_id": "0x1b36", 00:22:56.309 "model_number": "QEMU NVMe Ctrl", 00:22:56.309 "serial_number": "12341", 00:22:56.309 "firmware_revision": "8.0.0", 00:22:56.309 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:56.309 "oacs": { 00:22:56.309 "security": 0, 00:22:56.309 "format": 1, 00:22:56.309 "firmware": 0, 00:22:56.309 "ns_manage": 1 00:22:56.309 }, 00:22:56.309 "multi_ctrlr": false, 00:22:56.309 "ana_reporting": false 00:22:56.309 }, 00:22:56.309 "vs": { 00:22:56.309 "nvme_version": "1.4" 00:22:56.309 }, 00:22:56.309 "ns_data": { 00:22:56.309 "id": 1, 00:22:56.309 "can_share": false 00:22:56.309 } 00:22:56.309 } 00:22:56.309 ], 00:22:56.309 "mp_policy": "active_passive" 00:22:56.309 } 00:22:56.309 } 00:22:56.309 ]' 00:22:56.309 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:56.309 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:56.309 11:33:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:56.309 11:33:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:56.309 11:33:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:56.309 11:33:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:22:56.309 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:56.309 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:56.309 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:56.309 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:56.579 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:56.579 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=efc7a176-4bb5-4321-980c-959951fbec96 00:22:56.579 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:56.579 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u efc7a176-4bb5-4321-980c-959951fbec96 00:22:56.842 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:57.102 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=d508d576-60a3-47a3-818d-52c57286af2b 00:22:57.102 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d508d576-60a3-47a3-818d-52c57286af2b 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=4c1506f8-d832-4260-8aec-40f643cdd381 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4c1506f8-d832-4260-8aec-40f643cdd381 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=4c1506f8-d832-4260-8aec-40f643cdd381 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 4c1506f8-d832-4260-8aec-40f643cdd381 00:22:57.359 11:33:38 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=4c1506f8-d832-4260-8aec-40f643cdd381 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:57.359 11:33:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c1506f8-d832-4260-8aec-40f643cdd381 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:57.618 { 00:22:57.618 "name": "4c1506f8-d832-4260-8aec-40f643cdd381", 00:22:57.618 "aliases": [ 00:22:57.618 "lvs/nvme0n1p0" 00:22:57.618 ], 00:22:57.618 "product_name": "Logical Volume", 00:22:57.618 "block_size": 4096, 00:22:57.618 "num_blocks": 26476544, 00:22:57.618 "uuid": "4c1506f8-d832-4260-8aec-40f643cdd381", 00:22:57.618 "assigned_rate_limits": { 00:22:57.618 "rw_ios_per_sec": 0, 00:22:57.618 "rw_mbytes_per_sec": 0, 00:22:57.618 "r_mbytes_per_sec": 0, 00:22:57.618 "w_mbytes_per_sec": 0 00:22:57.618 }, 00:22:57.618 "claimed": false, 00:22:57.618 "zoned": false, 00:22:57.618 "supported_io_types": { 00:22:57.618 "read": true, 00:22:57.618 "write": true, 00:22:57.618 "unmap": true, 00:22:57.618 "flush": false, 00:22:57.618 "reset": true, 00:22:57.618 "nvme_admin": false, 00:22:57.618 "nvme_io": false, 00:22:57.618 "nvme_io_md": false, 00:22:57.618 "write_zeroes": true, 00:22:57.618 "zcopy": false, 00:22:57.618 "get_zone_info": false, 00:22:57.618 "zone_management": false, 00:22:57.618 "zone_append": false, 00:22:57.618 "compare": false, 00:22:57.618 "compare_and_write": false, 00:22:57.618 "abort": false, 00:22:57.618 "seek_hole": true, 00:22:57.618 "seek_data": true, 00:22:57.618 "copy": false, 00:22:57.618 "nvme_iov_md": false 00:22:57.618 }, 00:22:57.618 "driver_specific": { 00:22:57.618 "lvol": { 00:22:57.618 "lvol_store_uuid": "d508d576-60a3-47a3-818d-52c57286af2b", 00:22:57.618 "base_bdev": "nvme0n1", 00:22:57.618 "thin_provision": true, 00:22:57.618 "num_allocated_clusters": 0, 00:22:57.618 "snapshot": false, 00:22:57.618 "clone": false, 00:22:57.618 "esnap_clone": false 00:22:57.618 } 00:22:57.618 } 00:22:57.618 } 00:22:57.618 ]' 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:57.618 11:33:39 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:57.878 11:33:39 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:57.878 11:33:39 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:57.878 11:33:39 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 4c1506f8-d832-4260-8aec-40f643cdd381 00:22:57.878 11:33:39 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=4c1506f8-d832-4260-8aec-40f643cdd381 00:22:57.878 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:57.878 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:57.878 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:57.878 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c1506f8-d832-4260-8aec-40f643cdd381 00:22:58.136 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:58.136 { 00:22:58.136 "name": "4c1506f8-d832-4260-8aec-40f643cdd381", 00:22:58.136 "aliases": [ 00:22:58.136 "lvs/nvme0n1p0" 00:22:58.136 ], 00:22:58.136 "product_name": "Logical Volume", 00:22:58.136 "block_size": 4096, 00:22:58.136 "num_blocks": 26476544, 00:22:58.136 "uuid": "4c1506f8-d832-4260-8aec-40f643cdd381", 00:22:58.136 "assigned_rate_limits": { 00:22:58.136 "rw_ios_per_sec": 0, 00:22:58.136 "rw_mbytes_per_sec": 0, 00:22:58.136 "r_mbytes_per_sec": 0, 00:22:58.136 "w_mbytes_per_sec": 0 00:22:58.136 }, 00:22:58.136 "claimed": false, 00:22:58.136 "zoned": false, 00:22:58.136 "supported_io_types": { 00:22:58.136 "read": true, 00:22:58.136 "write": true, 00:22:58.136 "unmap": true, 00:22:58.136 "flush": false, 00:22:58.136 "reset": true, 00:22:58.136 "nvme_admin": false, 00:22:58.136 "nvme_io": false, 00:22:58.136 "nvme_io_md": false, 00:22:58.136 "write_zeroes": true, 00:22:58.136 "zcopy": false, 00:22:58.136 "get_zone_info": false, 00:22:58.136 "zone_management": false, 00:22:58.136 "zone_append": false, 00:22:58.136 "compare": false, 00:22:58.136 "compare_and_write": false, 00:22:58.136 "abort": false, 00:22:58.136 "seek_hole": true, 00:22:58.136 "seek_data": true, 00:22:58.136 "copy": false, 00:22:58.136 "nvme_iov_md": false 00:22:58.136 }, 00:22:58.136 "driver_specific": { 00:22:58.136 "lvol": { 00:22:58.136 "lvol_store_uuid": "d508d576-60a3-47a3-818d-52c57286af2b", 00:22:58.136 "base_bdev": "nvme0n1", 00:22:58.136 "thin_provision": true, 00:22:58.136 "num_allocated_clusters": 0, 00:22:58.136 "snapshot": false, 00:22:58.136 "clone": false, 00:22:58.136 "esnap_clone": false 00:22:58.136 } 00:22:58.136 } 00:22:58.136 } 00:22:58.136 ]' 00:22:58.136 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:58.394 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:58.394 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:58.394 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:58.394 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:58.394 11:33:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:58.394 11:33:39 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:58.394 11:33:39 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:58.652 11:33:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:58.652 11:33:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 4c1506f8-d832-4260-8aec-40f643cdd381 00:22:58.652 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=4c1506f8-d832-4260-8aec-40f643cdd381 00:22:58.652 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:58.652 11:33:40 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:22:58.652 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:58.652 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c1506f8-d832-4260-8aec-40f643cdd381 00:22:58.652 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:58.652 { 00:22:58.652 "name": "4c1506f8-d832-4260-8aec-40f643cdd381", 00:22:58.652 "aliases": [ 00:22:58.652 "lvs/nvme0n1p0" 00:22:58.652 ], 00:22:58.652 "product_name": "Logical Volume", 00:22:58.652 "block_size": 4096, 00:22:58.652 "num_blocks": 26476544, 00:22:58.652 "uuid": "4c1506f8-d832-4260-8aec-40f643cdd381", 00:22:58.652 "assigned_rate_limits": { 00:22:58.652 "rw_ios_per_sec": 0, 00:22:58.652 "rw_mbytes_per_sec": 0, 00:22:58.652 "r_mbytes_per_sec": 0, 00:22:58.652 "w_mbytes_per_sec": 0 00:22:58.652 }, 00:22:58.652 "claimed": false, 00:22:58.652 "zoned": false, 00:22:58.652 "supported_io_types": { 00:22:58.653 "read": true, 00:22:58.653 "write": true, 00:22:58.653 "unmap": true, 00:22:58.653 "flush": false, 00:22:58.653 "reset": true, 00:22:58.653 "nvme_admin": false, 00:22:58.653 "nvme_io": false, 00:22:58.653 "nvme_io_md": false, 00:22:58.653 "write_zeroes": true, 00:22:58.653 "zcopy": false, 00:22:58.653 "get_zone_info": false, 00:22:58.653 "zone_management": false, 00:22:58.653 "zone_append": false, 00:22:58.653 "compare": false, 00:22:58.653 "compare_and_write": false, 00:22:58.653 "abort": false, 00:22:58.653 "seek_hole": true, 00:22:58.653 "seek_data": true, 00:22:58.653 "copy": false, 00:22:58.653 "nvme_iov_md": false 00:22:58.653 }, 00:22:58.653 "driver_specific": { 00:22:58.653 "lvol": { 00:22:58.653 "lvol_store_uuid": "d508d576-60a3-47a3-818d-52c57286af2b", 00:22:58.653 "base_bdev": "nvme0n1", 00:22:58.653 "thin_provision": true, 00:22:58.653 "num_allocated_clusters": 0, 00:22:58.653 "snapshot": false, 00:22:58.653 "clone": false, 00:22:58.653 "esnap_clone": false 00:22:58.653 } 00:22:58.653 } 00:22:58.653 } 00:22:58.653 ]' 00:22:58.653 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:58.911 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:58.911 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:58.911 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:58.911 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:58.911 11:33:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:58.911 11:33:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:58.911 11:33:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4c1506f8-d832-4260-8aec-40f643cdd381 -c nvc0n1p0 --l2p_dram_limit 20 00:22:58.911 [2024-10-07 11:33:40.610215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.911 [2024-10-07 11:33:40.610479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:58.911 [2024-10-07 11:33:40.610524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:58.911 [2024-10-07 11:33:40.610539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.911 [2024-10-07 11:33:40.610625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.911 [2024-10-07 11:33:40.610641] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:58.911 [2024-10-07 11:33:40.610652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:58.911 [2024-10-07 11:33:40.610665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.911 [2024-10-07 11:33:40.610687] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:58.911 [2024-10-07 11:33:40.611767] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:58.911 [2024-10-07 11:33:40.611795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.912 [2024-10-07 11:33:40.611808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:58.912 [2024-10-07 11:33:40.611820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:22:58.912 [2024-10-07 11:33:40.611833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.912 [2024-10-07 11:33:40.611984] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6ccd40ca-d371-4042-8c3e-7347248ba751 00:22:58.912 [2024-10-07 11:33:40.613419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.912 [2024-10-07 11:33:40.613454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:58.912 [2024-10-07 11:33:40.613472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:58.912 [2024-10-07 11:33:40.613482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.912 [2024-10-07 11:33:40.621159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.912 [2024-10-07 11:33:40.621328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:58.912 [2024-10-07 11:33:40.621443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.631 ms 00:22:59.171 [2024-10-07 11:33:40.621481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.171 [2024-10-07 11:33:40.621615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.171 [2024-10-07 11:33:40.621629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.171 [2024-10-07 11:33:40.621647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:59.171 [2024-10-07 11:33:40.621658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.171 [2024-10-07 11:33:40.621744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.171 [2024-10-07 11:33:40.621789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:59.171 [2024-10-07 11:33:40.621808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:59.171 [2024-10-07 11:33:40.621819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.171 [2024-10-07 11:33:40.621849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:59.171 [2024-10-07 11:33:40.627429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.171 [2024-10-07 11:33:40.627465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.171 [2024-10-07 11:33:40.627478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.602 ms 00:22:59.171 [2024-10-07 11:33:40.627490] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.171 [2024-10-07 11:33:40.627522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.171 [2024-10-07 11:33:40.627536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:59.171 [2024-10-07 11:33:40.627547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:59.171 [2024-10-07 11:33:40.627560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.171 [2024-10-07 11:33:40.627606] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:59.171 [2024-10-07 11:33:40.627756] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:59.171 [2024-10-07 11:33:40.627772] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:59.171 [2024-10-07 11:33:40.627789] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:59.171 [2024-10-07 11:33:40.627802] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:59.171 [2024-10-07 11:33:40.627817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:59.171 [2024-10-07 11:33:40.627829] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:59.171 [2024-10-07 11:33:40.627845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:59.171 [2024-10-07 11:33:40.627855] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:59.171 [2024-10-07 11:33:40.627867] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:59.171 [2024-10-07 11:33:40.627878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.171 [2024-10-07 11:33:40.627890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:59.171 [2024-10-07 11:33:40.627901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:22:59.171 [2024-10-07 11:33:40.627915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.171 [2024-10-07 11:33:40.627996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.171 [2024-10-07 11:33:40.628010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:59.171 [2024-10-07 11:33:40.628020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:59.171 [2024-10-07 11:33:40.628035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.171 [2024-10-07 11:33:40.628119] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:59.171 [2024-10-07 11:33:40.628133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:59.171 [2024-10-07 11:33:40.628144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.171 [2024-10-07 11:33:40.628157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:59.171 [2024-10-07 11:33:40.628179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:59.171 
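The layout figures printed above are internally consistent and worth a quick sanity check: 20971520 L2P entries at an address size of 4 bytes comes to exactly the 80.00 MiB reported for the l2p region, and 20971520 addressable 4096-byte blocks corresponds to the user space carved out of the 103424 MiB base device. A minimal shell sketch of that arithmetic, with the constants copied from the log output above:

  # sanity-check the FTL layout numbers reported by ftl_layout_setup
  l2p_entries=20971520   # "L2P entries: 20971520"
  addr_size=4            # "L2P address size: 4"
  block_size=4096        # lvol block_size from the bdev_get_bdevs dump
  echo "l2p table:  $(( l2p_entries * addr_size  / 1024 / 1024 )) MiB"   # -> 80 MiB
  echo "user space: $(( l2p_entries * block_size / 1024 / 1024 )) MiB"   # -> 81920 MiB

Because bdev_ftl_create was invoked with --l2p_dram_limit 20, only a ~20 MiB window of that 80 MiB table is kept resident in DRAM, which is why ftl_l2p_cache later reports "l2p maximum resident size is: 19 (of 20) MiB".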
[2024-10-07 11:33:40.628200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:59.171 [2024-10-07 11:33:40.628209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.171 [2024-10-07 11:33:40.628231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:59.171 [2024-10-07 11:33:40.628254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:59.171 [2024-10-07 11:33:40.628264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.171 [2024-10-07 11:33:40.628277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:59.171 [2024-10-07 11:33:40.628286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:59.171 [2024-10-07 11:33:40.628301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:59.171 [2024-10-07 11:33:40.628324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:59.171 [2024-10-07 11:33:40.628334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:59.171 [2024-10-07 11:33:40.628355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.171 [2024-10-07 11:33:40.628375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:59.171 [2024-10-07 11:33:40.628387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.171 [2024-10-07 11:33:40.628408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:59.171 [2024-10-07 11:33:40.628417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.171 [2024-10-07 11:33:40.628438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:59.171 [2024-10-07 11:33:40.628450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.171 [2024-10-07 11:33:40.628473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:59.171 [2024-10-07 11:33:40.628483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.171 [2024-10-07 11:33:40.628503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:59.171 [2024-10-07 11:33:40.628515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:59.171 [2024-10-07 11:33:40.628524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.171 [2024-10-07 11:33:40.628535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:59.171 [2024-10-07 11:33:40.628544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:59.171 [2024-10-07 11:33:40.628555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:59.171 [2024-10-07 11:33:40.628576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:59.171 [2024-10-07 11:33:40.628585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628597] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:59.171 [2024-10-07 11:33:40.628608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:59.171 [2024-10-07 11:33:40.628622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.171 [2024-10-07 11:33:40.628633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.171 [2024-10-07 11:33:40.628647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:59.171 [2024-10-07 11:33:40.628657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:59.171 [2024-10-07 11:33:40.628669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:59.171 [2024-10-07 11:33:40.628679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:59.171 [2024-10-07 11:33:40.628691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:59.171 [2024-10-07 11:33:40.628700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:59.171 [2024-10-07 11:33:40.628716] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:59.171 [2024-10-07 11:33:40.628731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.172 [2024-10-07 11:33:40.628756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:59.172 [2024-10-07 11:33:40.628767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:59.172 [2024-10-07 11:33:40.628779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:59.172 [2024-10-07 11:33:40.628790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:59.172 [2024-10-07 11:33:40.628802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:59.172 [2024-10-07 11:33:40.628813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:59.172 [2024-10-07 11:33:40.628826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:59.172 [2024-10-07 11:33:40.628836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:59.172 [2024-10-07 11:33:40.628851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:59.172 [2024-10-07 11:33:40.628861] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:59.172 [2024-10-07 11:33:40.628873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:59.172 [2024-10-07 11:33:40.628884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:59.172 [2024-10-07 11:33:40.628899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:59.172 [2024-10-07 11:33:40.628909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:59.172 [2024-10-07 11:33:40.628922] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:59.172 [2024-10-07 11:33:40.628933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.172 [2024-10-07 11:33:40.628946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:59.172 [2024-10-07 11:33:40.628957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:59.172 [2024-10-07 11:33:40.628969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:59.172 [2024-10-07 11:33:40.628980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:59.172 [2024-10-07 11:33:40.628993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.172 [2024-10-07 11:33:40.629004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:59.172 [2024-10-07 11:33:40.629018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:22:59.172 [2024-10-07 11:33:40.629028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.172 [2024-10-07 11:33:40.629070] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:22:59.172 [2024-10-07 11:33:40.629083] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:02.514 [2024-10-07 11:33:43.549303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.514 [2024-10-07 11:33:43.549358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:02.514 [2024-10-07 11:33:43.549379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2924.969 ms 00:23:02.514 [2024-10-07 11:33:43.549391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.514 [2024-10-07 11:33:43.601675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.514 [2024-10-07 11:33:43.601885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.514 [2024-10-07 11:33:43.602028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.036 ms 00:23:02.514 [2024-10-07 11:33:43.602069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.514 [2024-10-07 11:33:43.602251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.514 [2024-10-07 11:33:43.602279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.514 [2024-10-07 11:33:43.602298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:02.514 [2024-10-07 11:33:43.602311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.514 [2024-10-07 11:33:43.649549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.514 [2024-10-07 11:33:43.649595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.514 [2024-10-07 11:33:43.649615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.249 ms 00:23:02.514 [2024-10-07 11:33:43.649629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.514 [2024-10-07 11:33:43.649679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.514 [2024-10-07 11:33:43.649689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.514 [2024-10-07 11:33:43.649703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.514 [2024-10-07 11:33:43.649713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.514 [2024-10-07 11:33:43.650204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.514 [2024-10-07 11:33:43.650223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.514 [2024-10-07 11:33:43.650238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:23:02.514 [2024-10-07 11:33:43.650249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.514 [2024-10-07 11:33:43.650368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.514 [2024-10-07 11:33:43.650391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.514 [2024-10-07 11:33:43.650407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:02.514 [2024-10-07 11:33:43.650418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.514 [2024-10-07 11:33:43.668830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.514 [2024-10-07 11:33:43.668874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.514 [2024-10-07 
11:33:43.668892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.419 ms 00:23:02.515 [2024-10-07 11:33:43.668903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:43.681805] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:23:02.515 [2024-10-07 11:33:43.687658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:43.687711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.515 [2024-10-07 11:33:43.687725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.686 ms 00:23:02.515 [2024-10-07 11:33:43.687754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:43.773262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:43.773335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:02.515 [2024-10-07 11:33:43.773352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.602 ms 00:23:02.515 [2024-10-07 11:33:43.773366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:43.773562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:43.773583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.515 [2024-10-07 11:33:43.773595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:23:02.515 [2024-10-07 11:33:43.773608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:43.811183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:43.811409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:02.515 [2024-10-07 11:33:43.811435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.581 ms 00:23:02.515 [2024-10-07 11:33:43.811451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:43.848119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:43.848184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:02.515 [2024-10-07 11:33:43.848202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.617 ms 00:23:02.515 [2024-10-07 11:33:43.848214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:43.848927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:43.848951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.515 [2024-10-07 11:33:43.848963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:23:02.515 [2024-10-07 11:33:43.848979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:43.945773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:43.945849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:02.515 [2024-10-07 11:33:43.945867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.896 ms 00:23:02.515 [2024-10-07 11:33:43.945880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 
11:33:43.983862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:43.983927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:02.515 [2024-10-07 11:33:43.983944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.953 ms 00:23:02.515 [2024-10-07 11:33:43.983958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:44.021752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:44.021816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:02.515 [2024-10-07 11:33:44.021833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.808 ms 00:23:02.515 [2024-10-07 11:33:44.021846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:44.058190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:44.058258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.515 [2024-10-07 11:33:44.058281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.358 ms 00:23:02.515 [2024-10-07 11:33:44.058295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:44.058338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:44.058356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:02.515 [2024-10-07 11:33:44.058367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.515 [2024-10-07 11:33:44.058380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:44.058486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.515 [2024-10-07 11:33:44.058505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.515 [2024-10-07 11:33:44.058515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:02.515 [2024-10-07 11:33:44.058528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.515 [2024-10-07 11:33:44.059783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3454.553 ms, result 0 00:23:02.515 { 00:23:02.515 "name": "ftl0", 00:23:02.515 "uuid": "6ccd40ca-d371-4042-8c3e-7347248ba751" 00:23:02.515 } 00:23:02.515 11:33:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:23:02.515 11:33:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:23:02.515 11:33:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:23:02.774 11:33:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:23:02.774 [2024-10-07 11:33:44.415790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:02.774 I/O size of 69632 is greater than zero copy threshold (65536). 00:23:02.774 Zero copy mechanism will not be used. 00:23:02.774 Running I/O for 4 seconds... 
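This first bdevperf pass issues 69632-byte writes at queue depth 1; since 69632 exceeds the 65536-byte threshold noted above, zero copy is disabled for it. Taken together, the three perform_tests invocations in this log form a small matrix: shallow-queue 68 KiB random writes, deep-queue 4 KiB random writes, then a 4 KiB verify pass. The commands as they appear in this job, roughly:

  # the three measurement passes run against ftl0 in this job
  BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  "$BDEVPERF_PY" perform_tests -q 1   -w randwrite -t 4 -o 69632   # shallow queue, 68 KiB writes
  "$BDEVPERF_PY" perform_tests -q 128 -w randwrite -t 4 -o 4096    # deep queue, 4 KiB writes
  "$BDEVPERF_PY" perform_tests -q 128 -w verify    -t 4 -o 4096    # 4 KiB writes with read-back verify

The MiB/s column in the result tables is simply IOPS times IO size; for this pass:

  awk 'BEGIN { printf "%.2f MiB/s\n", 1853.25 * 69632 / 1048576 }'   # -> 123.07, matching the Total row below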
00:23:04.716 1776.00 IOPS, 117.94 MiB/s [2024-10-07T11:33:47.804Z] 1820.50 IOPS, 120.89 MiB/s [2024-10-07T11:33:48.739Z] 1861.67 IOPS, 123.63 MiB/s [2024-10-07T11:33:48.739Z] 1853.75 IOPS, 123.10 MiB/s 00:23:07.028 Latency(us) 00:23:07.028 [2024-10-07T11:33:48.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.028 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:23:07.028 ftl0 : 4.00 1853.25 123.07 0.00 0.00 564.46 217.14 2197.69 00:23:07.028 [2024-10-07T11:33:48.739Z] =================================================================================================================== 00:23:07.028 [2024-10-07T11:33:48.739Z] Total : 1853.25 123.07 0.00 0.00 564.46 217.14 2197.69 00:23:07.028 [2024-10-07 11:33:48.420360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:07.028 { 00:23:07.028 "results": [ 00:23:07.028 { 00:23:07.028 "job": "ftl0", 00:23:07.028 "core_mask": "0x1", 00:23:07.028 "workload": "randwrite", 00:23:07.028 "status": "finished", 00:23:07.028 "queue_depth": 1, 00:23:07.028 "io_size": 69632, 00:23:07.028 "runtime": 4.001619, 00:23:07.028 "iops": 1853.249897104147, 00:23:07.028 "mibps": 123.06737597957226, 00:23:07.028 "io_failed": 0, 00:23:07.029 "io_timeout": 0, 00:23:07.029 "avg_latency_us": 564.4621272576823, 00:23:07.029 "min_latency_us": 217.13734939759036, 00:23:07.029 "max_latency_us": 2197.6931726907633 00:23:07.029 } 00:23:07.029 ], 00:23:07.029 "core_count": 1 00:23:07.029 } 00:23:07.029 11:33:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:23:07.029 [2024-10-07 11:33:48.557813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:07.029 Running I/O for 4 seconds... 
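For the queue-depth-128 pass that starts here, the relationship between the result columns can be checked with Little's law: sustained IOPS is approximately queue depth divided by average latency. Using the averages from the table below:

  # Little's law check for the qdepth=128 randwrite pass (values from the table below)
  awk 'BEGIN { printf "%.0f IOPS\n", 128 / 11993.40e-6 }'   # -> ~10673, close to the reported 10651.17

The small gap plausibly reflects ramp-up and teardown intervals during the 4.02 s runtime when the queue is not completely full.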
00:23:08.897 11294.00 IOPS, 44.12 MiB/s [2024-10-07T11:33:51.983Z] 10755.00 IOPS, 42.01 MiB/s [2024-10-07T11:33:52.919Z] 10756.67 IOPS, 42.02 MiB/s [2024-10-07T11:33:52.919Z] 10661.00 IOPS, 41.64 MiB/s 00:23:11.208 Latency(us) 00:23:11.208 [2024-10-07T11:33:52.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.208 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:23:11.208 ftl0 : 4.02 10651.17 41.61 0.00 0.00 11993.40 228.65 38953.12 00:23:11.208 [2024-10-07T11:33:52.919Z] =================================================================================================================== 00:23:11.208 [2024-10-07T11:33:52.919Z] Total : 10651.17 41.61 0.00 0.00 11993.40 0.00 38953.12 00:23:11.208 { 00:23:11.208 "results": [ 00:23:11.208 { 00:23:11.208 "job": "ftl0", 00:23:11.208 "core_mask": "0x1", 00:23:11.208 "workload": "randwrite", 00:23:11.208 "status": "finished", 00:23:11.208 "queue_depth": 128, 00:23:11.208 "io_size": 4096, 00:23:11.208 "runtime": 4.015429, 00:23:11.208 "iops": 10651.165790753616, 00:23:11.208 "mibps": 41.60611637013131, 00:23:11.208 "io_failed": 0, 00:23:11.208 "io_timeout": 0, 00:23:11.208 "avg_latency_us": 11993.404150455783, 00:23:11.208 "min_latency_us": 228.65220883534136, 00:23:11.208 "max_latency_us": 38953.124497991965 00:23:11.208 } 00:23:11.208 ], 00:23:11.208 "core_count": 1 00:23:11.208 } 00:23:11.208 [2024-10-07 11:33:52.577363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:11.208 11:33:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:23:11.208 [2024-10-07 11:33:52.698463] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:11.208 Running I/O for 4 seconds... 
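The verify pass now running exercises the whole device: its "Verification LBA range" of start 0x0, length 0x1400000 covers every user block, since 0x1400000 in decimal is exactly the L2P entry count reported at startup:

  echo "$(( 0x1400000 )) blocks"                 # -> 20971520, i.e. all "L2P entries: 20971520"
  echo "$(( 0x1400000 * 4096 / 1048576 )) MiB"   # -> 81920 MiB of addressable user space read back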
00:23:13.079 8245.00 IOPS, 32.21 MiB/s [2024-10-07T11:33:55.744Z] 8362.00 IOPS, 32.66 MiB/s [2024-10-07T11:33:57.121Z] 8358.67 IOPS, 32.65 MiB/s [2024-10-07T11:33:57.121Z] 8468.50 IOPS, 33.08 MiB/s 00:23:15.410 Latency(us) 00:23:15.410 [2024-10-07T11:33:57.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.410 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:15.410 Verification LBA range: start 0x0 length 0x1400000 00:23:15.410 ftl0 : 4.01 8480.68 33.13 0.00 0.00 15048.06 254.97 30530.83 00:23:15.410 [2024-10-07T11:33:57.121Z] =================================================================================================================== 00:23:15.410 [2024-10-07T11:33:57.121Z] Total : 8480.68 33.13 0.00 0.00 15048.06 0.00 30530.83 00:23:15.410 [2024-10-07 11:33:56.721265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:15.410 { 00:23:15.410 "results": [ 00:23:15.410 { 00:23:15.410 "job": "ftl0", 00:23:15.410 "core_mask": "0x1", 00:23:15.410 "workload": "verify", 00:23:15.410 "status": "finished", 00:23:15.410 "verify_range": { 00:23:15.410 "start": 0, 00:23:15.410 "length": 20971520 00:23:15.410 }, 00:23:15.410 "queue_depth": 128, 00:23:15.410 "io_size": 4096, 00:23:15.410 "runtime": 4.009346, 00:23:15.410 "iops": 8480.684879778399, 00:23:15.410 "mibps": 33.12767531163437, 00:23:15.410 "io_failed": 0, 00:23:15.410 "io_timeout": 0, 00:23:15.410 "avg_latency_us": 15048.064792597836, 00:23:15.410 "min_latency_us": 254.9718875502008, 00:23:15.410 "max_latency_us": 30530.82730923695 00:23:15.410 } 00:23:15.410 ], 00:23:15.410 "core_count": 1 00:23:15.410 } 00:23:15.410 11:33:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:23:15.410 [2024-10-07 11:33:56.920694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.410 [2024-10-07 11:33:56.920767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:15.410 [2024-10-07 11:33:56.920785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:15.410 [2024-10-07 11:33:56.920799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.410 [2024-10-07 11:33:56.920823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:15.410 [2024-10-07 11:33:56.924915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.410 [2024-10-07 11:33:56.924947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:15.410 [2024-10-07 11:33:56.924964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:23:15.410 [2024-10-07 11:33:56.924974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.410 [2024-10-07 11:33:56.926812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.410 [2024-10-07 11:33:56.926973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:15.410 [2024-10-07 11:33:56.927005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:23:15.410 [2024-10-07 11:33:56.927016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.121392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.121460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:23:15.670 [2024-10-07 11:33:57.121487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 194.648 ms 00:23:15.670 [2024-10-07 11:33:57.121499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.126721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.126798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:15.670 [2024-10-07 11:33:57.126816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:23:15.670 [2024-10-07 11:33:57.126827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.163737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.163804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:15.670 [2024-10-07 11:33:57.163825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.859 ms 00:23:15.670 [2024-10-07 11:33:57.163836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.186134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.186324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:15.670 [2024-10-07 11:33:57.186356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.287 ms 00:23:15.670 [2024-10-07 11:33:57.186368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.186526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.186540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:15.670 [2024-10-07 11:33:57.186559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:15.670 [2024-10-07 11:33:57.186569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.223632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.223678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:15.670 [2024-10-07 11:33:57.223696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.096 ms 00:23:15.670 [2024-10-07 11:33:57.223707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.260498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.260542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:15.670 [2024-10-07 11:33:57.260561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.791 ms 00:23:15.670 [2024-10-07 11:33:57.260571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.296724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.296773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:15.670 [2024-10-07 11:33:57.296791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.165 ms 00:23:15.670 [2024-10-07 11:33:57.296802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.334008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.670 [2024-10-07 11:33:57.334059] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:15.670 [2024-10-07 11:33:57.334083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.165 ms 00:23:15.670 [2024-10-07 11:33:57.334094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.670 [2024-10-07 11:33:57.334142] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:15.670 [2024-10-07 11:33:57.334160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:23:15.670 [2024-10-07 11:33:57.334449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:15.670 [2024-10-07 11:33:57.334676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.334994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335407] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:15.671 [2024-10-07 11:33:57.335464] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:15.671 [2024-10-07 11:33:57.335477] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ccd40ca-d371-4042-8c3e-7347248ba751 00:23:15.671 [2024-10-07 11:33:57.335488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:15.671 [2024-10-07 11:33:57.335501] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:15.671 [2024-10-07 11:33:57.335511] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:15.671 [2024-10-07 11:33:57.335524] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:15.671 [2024-10-07 11:33:57.335534] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:15.671 [2024-10-07 11:33:57.335547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:15.671 [2024-10-07 11:33:57.335557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:15.671 [2024-10-07 11:33:57.335571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:15.671 [2024-10-07 11:33:57.335581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:15.671 [2024-10-07 11:33:57.335593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.671 [2024-10-07 11:33:57.335604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:15.671 [2024-10-07 11:33:57.335617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:23:15.671 [2024-10-07 11:33:57.335629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.671 [2024-10-07 11:33:57.356370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.671 [2024-10-07 11:33:57.356535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:15.671 [2024-10-07 11:33:57.356563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.684 ms 00:23:15.671 [2024-10-07 11:33:57.356574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.671 [2024-10-07 11:33:57.357201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.671 [2024-10-07 11:33:57.357220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:15.671 [2024-10-07 11:33:57.357234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:23:15.671 [2024-10-07 11:33:57.357244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.930 [2024-10-07 11:33:57.406861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.930 [2024-10-07 11:33:57.407056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.930 [2024-10-07 11:33:57.407090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.930 [2024-10-07 11:33:57.407108] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:15.930 [2024-10-07 11:33:57.407183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.930 [2024-10-07 11:33:57.407195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.930 [2024-10-07 11:33:57.407212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.930 [2024-10-07 11:33:57.407222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.930 [2024-10-07 11:33:57.407328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.930 [2024-10-07 11:33:57.407342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.930 [2024-10-07 11:33:57.407356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.930 [2024-10-07 11:33:57.407366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.930 [2024-10-07 11:33:57.407387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.930 [2024-10-07 11:33:57.407397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.930 [2024-10-07 11:33:57.407410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.930 [2024-10-07 11:33:57.407423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.930 [2024-10-07 11:33:57.534946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.930 [2024-10-07 11:33:57.535013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.930 [2024-10-07 11:33:57.535037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.930 [2024-10-07 11:33:57.535047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.931 [2024-10-07 11:33:57.639016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.931 [2024-10-07 11:33:57.639081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.931 [2024-10-07 11:33:57.639104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.931 [2024-10-07 11:33:57.639115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.931 [2024-10-07 11:33:57.639238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.931 [2024-10-07 11:33:57.639251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.931 [2024-10-07 11:33:57.639265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.931 [2024-10-07 11:33:57.639276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.931 [2024-10-07 11:33:57.639333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.931 [2024-10-07 11:33:57.639345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.931 [2024-10-07 11:33:57.639358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.931 [2024-10-07 11:33:57.639368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.931 [2024-10-07 11:33:57.639491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.931 [2024-10-07 11:33:57.639505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.931 [2024-10-07 11:33:57.639522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:15.931 [2024-10-07 11:33:57.639532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.931 [2024-10-07 11:33:57.639572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.931 [2024-10-07 11:33:57.639585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:15.931 [2024-10-07 11:33:57.639598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.931 [2024-10-07 11:33:57.639608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.931 [2024-10-07 11:33:57.639652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.931 [2024-10-07 11:33:57.639664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.931 [2024-10-07 11:33:57.639676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.931 [2024-10-07 11:33:57.639687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.931 [2024-10-07 11:33:57.639736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.931 [2024-10-07 11:33:57.639787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.931 [2024-10-07 11:33:57.639801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.931 [2024-10-07 11:33:57.639811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.931 [2024-10-07 11:33:57.639951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 720.377 ms, result 0 00:23:16.189 true 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75780 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 75780 ']' 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 75780 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75780 00:23:16.189 killing process with pid 75780 00:23:16.189 Received shutdown signal, test time was about 4.000000 seconds 00:23:16.189 00:23:16.189 Latency(us) 00:23:16.189 [2024-10-07T11:33:57.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.189 [2024-10-07T11:33:57.900Z] =================================================================================================================== 00:23:16.189 [2024-10-07T11:33:57.900Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75780' 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 75780 00:23:16.189 11:33:57 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 75780 00:23:20.380 Remove shared memory files 00:23:20.380 11:34:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:20.380 11:34:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:20.380 11:34:01 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:20.380 11:34:01 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:20.380 11:34:01 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:20.380 11:34:01 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:20.381 11:34:01 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:20.381 11:34:01 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:20.381 ************************************ 00:23:20.381 END TEST ftl_bdevperf 00:23:20.381 ************************************ 00:23:20.381 00:23:20.381 real 0m25.512s 00:23:20.381 user 0m28.316s 00:23:20.381 sys 0m1.303s 00:23:20.381 11:34:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:20.381 11:34:01 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:20.381 11:34:01 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:20.381 11:34:01 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:20.381 11:34:01 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:20.381 11:34:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:20.381 ************************************ 00:23:20.381 START TEST ftl_trim 00:23:20.381 ************************************ 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:20.381 * Looking for test storage... 00:23:20.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.381 11:34:01 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:20.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.381 --rc genhtml_branch_coverage=1 00:23:20.381 --rc genhtml_function_coverage=1 00:23:20.381 --rc genhtml_legend=1 00:23:20.381 --rc geninfo_all_blocks=1 00:23:20.381 --rc geninfo_unexecuted_blocks=1 00:23:20.381 00:23:20.381 ' 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:20.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.381 --rc genhtml_branch_coverage=1 00:23:20.381 --rc genhtml_function_coverage=1 00:23:20.381 --rc genhtml_legend=1 00:23:20.381 --rc geninfo_all_blocks=1 00:23:20.381 --rc geninfo_unexecuted_blocks=1 00:23:20.381 00:23:20.381 ' 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:20.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.381 --rc genhtml_branch_coverage=1 00:23:20.381 --rc genhtml_function_coverage=1 00:23:20.381 --rc genhtml_legend=1 00:23:20.381 --rc geninfo_all_blocks=1 00:23:20.381 --rc geninfo_unexecuted_blocks=1 00:23:20.381 00:23:20.381 ' 00:23:20.381 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:20.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.381 --rc genhtml_branch_coverage=1 00:23:20.381 --rc genhtml_function_coverage=1 00:23:20.381 --rc genhtml_legend=1 00:23:20.381 --rc geninfo_all_blocks=1 00:23:20.381 --rc geninfo_unexecuted_blocks=1 00:23:20.381 00:23:20.381 ' 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
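The xtrace above steps through scripts/common.sh's version check: lt 1.15 2 delegates to cmp_versions, which splits both version strings on IFS=.-:, normalizes each component via decimal, and compares component by component until one side differs. A minimal bash sketch of that logic, assuming nothing beyond what the trace shows (the helper name cmp_lt and the compare-as-0 fallback for missing components are simplifications of this sketch, not the verbatim scripts/common.sh source, which also tracks lt/gt/eq counters and dispatches on the operator):

    cmp_lt() {                    # cmp_lt 1.15 2 -> returns 0 iff 1.15 < 2
        local IFS=.-:             # split on dots, dashes, colons, as traced
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v c1 c2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            c1=${ver1[v]:-0} c2=${ver2[v]:-0}    # absent components compare as 0
            [[ $c1 =~ ^[0-9]+$ ]] || c1=0        # stand-in for the traced decimal helper
            [[ $c2 =~ ^[0-9]+$ ]] || c2=0
            (( c1 > c2 )) && return 1            # first differing component decides
            (( c1 < c2 )) && return 0
        done
        return 1                                 # equal versions are not strictly less
    }
    cmp_lt 1.15 2 && echo ok                     # ok: matches the traced outcome

Here ver1=(1 15) and ver2=(2), so the first iteration already settles the comparison (1 < 2); that is the return 0 visible in the trace just before the lcov coverage options are exported.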
00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.381 11:34:01 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.382 11:34:01 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76132 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76132 00:23:20.382 11:34:01 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:20.382 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76132 ']' 00:23:20.382 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.382 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.382 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.382 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.382 11:34:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:20.382 [2024-10-07 11:34:02.084912] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:23:20.382 [2024-10-07 11:34:02.085229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76132 ] 00:23:20.653 [2024-10-07 11:34:02.259944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:20.912 [2024-10-07 11:34:02.479972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.912 [2024-10-07 11:34:02.480025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.912 [2024-10-07 11:34:02.480062] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.857 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.857 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:23:21.857 11:34:03 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:21.857 11:34:03 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:21.857 11:34:03 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:21.857 11:34:03 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:21.857 11:34:03 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:21.857 11:34:03 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:22.128 11:34:03 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:22.128 11:34:03 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:22.128 11:34:03 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:22.128 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:22.128 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:22.128 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:23:22.128 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:23:22.128 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:22.387 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:22.387 { 00:23:22.387 "name": "nvme0n1", 00:23:22.387 "aliases": [ 
00:23:22.387 "766cc9a0-0c90-4093-8d87-a3ffbb6ecfd1" 00:23:22.387 ], 00:23:22.387 "product_name": "NVMe disk", 00:23:22.387 "block_size": 4096, 00:23:22.387 "num_blocks": 1310720, 00:23:22.387 "uuid": "766cc9a0-0c90-4093-8d87-a3ffbb6ecfd1", 00:23:22.387 "numa_id": -1, 00:23:22.387 "assigned_rate_limits": { 00:23:22.387 "rw_ios_per_sec": 0, 00:23:22.387 "rw_mbytes_per_sec": 0, 00:23:22.387 "r_mbytes_per_sec": 0, 00:23:22.387 "w_mbytes_per_sec": 0 00:23:22.387 }, 00:23:22.387 "claimed": true, 00:23:22.387 "claim_type": "read_many_write_one", 00:23:22.387 "zoned": false, 00:23:22.387 "supported_io_types": { 00:23:22.387 "read": true, 00:23:22.387 "write": true, 00:23:22.387 "unmap": true, 00:23:22.387 "flush": true, 00:23:22.387 "reset": true, 00:23:22.387 "nvme_admin": true, 00:23:22.387 "nvme_io": true, 00:23:22.387 "nvme_io_md": false, 00:23:22.387 "write_zeroes": true, 00:23:22.387 "zcopy": false, 00:23:22.387 "get_zone_info": false, 00:23:22.387 "zone_management": false, 00:23:22.387 "zone_append": false, 00:23:22.387 "compare": true, 00:23:22.387 "compare_and_write": false, 00:23:22.387 "abort": true, 00:23:22.387 "seek_hole": false, 00:23:22.387 "seek_data": false, 00:23:22.387 "copy": true, 00:23:22.387 "nvme_iov_md": false 00:23:22.387 }, 00:23:22.387 "driver_specific": { 00:23:22.387 "nvme": [ 00:23:22.387 { 00:23:22.387 "pci_address": "0000:00:11.0", 00:23:22.387 "trid": { 00:23:22.387 "trtype": "PCIe", 00:23:22.387 "traddr": "0000:00:11.0" 00:23:22.387 }, 00:23:22.387 "ctrlr_data": { 00:23:22.387 "cntlid": 0, 00:23:22.387 "vendor_id": "0x1b36", 00:23:22.387 "model_number": "QEMU NVMe Ctrl", 00:23:22.387 "serial_number": "12341", 00:23:22.387 "firmware_revision": "8.0.0", 00:23:22.387 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:22.387 "oacs": { 00:23:22.387 "security": 0, 00:23:22.387 "format": 1, 00:23:22.387 "firmware": 0, 00:23:22.387 "ns_manage": 1 00:23:22.387 }, 00:23:22.387 "multi_ctrlr": false, 00:23:22.387 "ana_reporting": false 00:23:22.387 }, 00:23:22.387 "vs": { 00:23:22.387 "nvme_version": "1.4" 00:23:22.387 }, 00:23:22.387 "ns_data": { 00:23:22.387 "id": 1, 00:23:22.387 "can_share": false 00:23:22.387 } 00:23:22.387 } 00:23:22.387 ], 00:23:22.387 "mp_policy": "active_passive" 00:23:22.387 } 00:23:22.387 } 00:23:22.387 ]' 00:23:22.387 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:22.387 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:23:22.387 11:34:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:22.387 11:34:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:22.387 11:34:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:22.387 11:34:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:23:22.387 11:34:04 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:22.387 11:34:04 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:22.387 11:34:04 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:22.387 11:34:04 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:22.387 11:34:04 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:22.645 11:34:04 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=d508d576-60a3-47a3-818d-52c57286af2b 00:23:22.645 11:34:04 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:22.645 11:34:04 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u d508d576-60a3-47a3-818d-52c57286af2b 00:23:22.904 11:34:04 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:23.166 11:34:04 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=fcfc1214-167d-46e1-ab9f-0747f577ba37 00:23:23.166 11:34:04 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fcfc1214-167d-46e1-ab9f-0747f577ba37 00:23:23.424 11:34:04 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:23.424 11:34:04 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:23.424 11:34:04 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:23.424 11:34:04 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:23.424 11:34:04 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:23.424 11:34:04 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:23.424 11:34:04 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:23.424 11:34:04 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:23.424 11:34:04 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:23.424 11:34:04 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:23:23.424 11:34:04 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:23:23.424 11:34:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:23.424 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:23.424 { 00:23:23.424 "name": "88b346d1-c7fe-4cad-bdcc-fbc09c59d129", 00:23:23.424 "aliases": [ 00:23:23.424 "lvs/nvme0n1p0" 00:23:23.424 ], 00:23:23.424 "product_name": "Logical Volume", 00:23:23.424 "block_size": 4096, 00:23:23.424 "num_blocks": 26476544, 00:23:23.424 "uuid": "88b346d1-c7fe-4cad-bdcc-fbc09c59d129", 00:23:23.424 "assigned_rate_limits": { 00:23:23.424 "rw_ios_per_sec": 0, 00:23:23.424 "rw_mbytes_per_sec": 0, 00:23:23.424 "r_mbytes_per_sec": 0, 00:23:23.424 "w_mbytes_per_sec": 0 00:23:23.424 }, 00:23:23.424 "claimed": false, 00:23:23.424 "zoned": false, 00:23:23.424 "supported_io_types": { 00:23:23.424 "read": true, 00:23:23.424 "write": true, 00:23:23.424 "unmap": true, 00:23:23.424 "flush": false, 00:23:23.424 "reset": true, 00:23:23.424 "nvme_admin": false, 00:23:23.424 "nvme_io": false, 00:23:23.424 "nvme_io_md": false, 00:23:23.424 "write_zeroes": true, 00:23:23.424 "zcopy": false, 00:23:23.424 "get_zone_info": false, 00:23:23.424 "zone_management": false, 00:23:23.424 "zone_append": false, 00:23:23.424 "compare": false, 00:23:23.424 "compare_and_write": false, 00:23:23.424 "abort": false, 00:23:23.424 "seek_hole": true, 00:23:23.424 "seek_data": true, 00:23:23.424 "copy": false, 00:23:23.424 "nvme_iov_md": false 00:23:23.424 }, 00:23:23.424 "driver_specific": { 00:23:23.424 "lvol": { 00:23:23.424 "lvol_store_uuid": "fcfc1214-167d-46e1-ab9f-0747f577ba37", 00:23:23.424 "base_bdev": "nvme0n1", 00:23:23.424 "thin_provision": true, 00:23:23.424 "num_allocated_clusters": 0, 00:23:23.424 "snapshot": false, 00:23:23.424 "clone": false, 00:23:23.424 "esnap_clone": false 00:23:23.424 } 00:23:23.424 } 00:23:23.424 } 00:23:23.424 ]' 00:23:23.424 11:34:05 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:23.682 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:23:23.682 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:23.682 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:23.682 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:23.682 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:23:23.682 11:34:05 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:23.682 11:34:05 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:23.682 11:34:05 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:23.940 11:34:05 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:23.940 11:34:05 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:23.940 11:34:05 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:23.940 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:23.940 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:23.940 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:23:23.940 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:23:23.940 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:24.198 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:24.198 { 00:23:24.198 "name": "88b346d1-c7fe-4cad-bdcc-fbc09c59d129", 00:23:24.198 "aliases": [ 00:23:24.198 "lvs/nvme0n1p0" 00:23:24.198 ], 00:23:24.198 "product_name": "Logical Volume", 00:23:24.198 "block_size": 4096, 00:23:24.198 "num_blocks": 26476544, 00:23:24.198 "uuid": "88b346d1-c7fe-4cad-bdcc-fbc09c59d129", 00:23:24.198 "assigned_rate_limits": { 00:23:24.198 "rw_ios_per_sec": 0, 00:23:24.198 "rw_mbytes_per_sec": 0, 00:23:24.198 "r_mbytes_per_sec": 0, 00:23:24.198 "w_mbytes_per_sec": 0 00:23:24.198 }, 00:23:24.198 "claimed": false, 00:23:24.198 "zoned": false, 00:23:24.198 "supported_io_types": { 00:23:24.198 "read": true, 00:23:24.198 "write": true, 00:23:24.198 "unmap": true, 00:23:24.198 "flush": false, 00:23:24.198 "reset": true, 00:23:24.198 "nvme_admin": false, 00:23:24.198 "nvme_io": false, 00:23:24.198 "nvme_io_md": false, 00:23:24.198 "write_zeroes": true, 00:23:24.198 "zcopy": false, 00:23:24.198 "get_zone_info": false, 00:23:24.198 "zone_management": false, 00:23:24.198 "zone_append": false, 00:23:24.198 "compare": false, 00:23:24.198 "compare_and_write": false, 00:23:24.198 "abort": false, 00:23:24.198 "seek_hole": true, 00:23:24.198 "seek_data": true, 00:23:24.198 "copy": false, 00:23:24.198 "nvme_iov_md": false 00:23:24.198 }, 00:23:24.198 "driver_specific": { 00:23:24.198 "lvol": { 00:23:24.198 "lvol_store_uuid": "fcfc1214-167d-46e1-ab9f-0747f577ba37", 00:23:24.198 "base_bdev": "nvme0n1", 00:23:24.198 "thin_provision": true, 00:23:24.198 "num_allocated_clusters": 0, 00:23:24.198 "snapshot": false, 00:23:24.198 "clone": false, 00:23:24.198 "esnap_clone": false 00:23:24.198 } 00:23:24.198 } 00:23:24.198 } 00:23:24.198 ]' 00:23:24.198 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:24.198 11:34:05 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:23:24.198 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:24.198 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:24.198 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:24.199 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:23:24.199 11:34:05 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:24.199 11:34:05 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:24.456 11:34:05 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:24.456 11:34:05 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:24.456 11:34:05 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:24.456 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:24.457 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:24.457 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:23:24.457 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:23:24.457 11:34:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88b346d1-c7fe-4cad-bdcc-fbc09c59d129 00:23:24.715 11:34:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:24.715 { 00:23:24.715 "name": "88b346d1-c7fe-4cad-bdcc-fbc09c59d129", 00:23:24.715 "aliases": [ 00:23:24.715 "lvs/nvme0n1p0" 00:23:24.715 ], 00:23:24.716 "product_name": "Logical Volume", 00:23:24.716 "block_size": 4096, 00:23:24.716 "num_blocks": 26476544, 00:23:24.716 "uuid": "88b346d1-c7fe-4cad-bdcc-fbc09c59d129", 00:23:24.716 "assigned_rate_limits": { 00:23:24.716 "rw_ios_per_sec": 0, 00:23:24.716 "rw_mbytes_per_sec": 0, 00:23:24.716 "r_mbytes_per_sec": 0, 00:23:24.716 "w_mbytes_per_sec": 0 00:23:24.716 }, 00:23:24.716 "claimed": false, 00:23:24.716 "zoned": false, 00:23:24.716 "supported_io_types": { 00:23:24.716 "read": true, 00:23:24.716 "write": true, 00:23:24.716 "unmap": true, 00:23:24.716 "flush": false, 00:23:24.716 "reset": true, 00:23:24.716 "nvme_admin": false, 00:23:24.716 "nvme_io": false, 00:23:24.716 "nvme_io_md": false, 00:23:24.716 "write_zeroes": true, 00:23:24.716 "zcopy": false, 00:23:24.716 "get_zone_info": false, 00:23:24.716 "zone_management": false, 00:23:24.716 "zone_append": false, 00:23:24.716 "compare": false, 00:23:24.716 "compare_and_write": false, 00:23:24.716 "abort": false, 00:23:24.716 "seek_hole": true, 00:23:24.716 "seek_data": true, 00:23:24.716 "copy": false, 00:23:24.716 "nvme_iov_md": false 00:23:24.716 }, 00:23:24.716 "driver_specific": { 00:23:24.716 "lvol": { 00:23:24.716 "lvol_store_uuid": "fcfc1214-167d-46e1-ab9f-0747f577ba37", 00:23:24.716 "base_bdev": "nvme0n1", 00:23:24.716 "thin_provision": true, 00:23:24.716 "num_allocated_clusters": 0, 00:23:24.716 "snapshot": false, 00:23:24.716 "clone": false, 00:23:24.716 "esnap_clone": false 00:23:24.716 } 00:23:24.716 } 00:23:24.716 } 00:23:24.716 ]' 00:23:24.716 11:34:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:24.716 11:34:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:23:24.716 11:34:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:24.716 11:34:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:23:24.716 11:34:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:24.716 11:34:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:23:24.716 11:34:06 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:24.716 11:34:06 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 88b346d1-c7fe-4cad-bdcc-fbc09c59d129 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:24.974 [2024-10-07 11:34:06.486512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.486754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:24.975 [2024-10-07 11:34:06.486788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:24.975 [2024-10-07 11:34:06.486806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.490143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.490183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.975 [2024-10-07 11:34:06.490198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:23:24.975 [2024-10-07 11:34:06.490210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.490348] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:24.975 [2024-10-07 11:34:06.491313] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:24.975 [2024-10-07 11:34:06.491349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.491361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.975 [2024-10-07 11:34:06.491374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:23:24.975 [2024-10-07 11:34:06.491387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.491500] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c659e091-29a7-4fa5-b97b-7822cea8a8a4 00:23:24.975 [2024-10-07 11:34:06.492931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.493083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:24.975 [2024-10-07 11:34:06.493104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:24.975 [2024-10-07 11:34:06.493117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.500607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.500784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.975 [2024-10-07 11:34:06.500807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.409 ms 00:23:24.975 [2024-10-07 11:34:06.500821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.500983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.501001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.975 [2024-10-07 11:34:06.501012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.082 ms 00:23:24.975 [2024-10-07 11:34:06.501029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.501071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.501085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:24.975 [2024-10-07 11:34:06.501096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:24.975 [2024-10-07 11:34:06.501109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.501146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:24.975 [2024-10-07 11:34:06.506229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.506260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.975 [2024-10-07 11:34:06.506275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.094 ms 00:23:24.975 [2024-10-07 11:34:06.506292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.506359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.506372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:24.975 [2024-10-07 11:34:06.506385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:24.975 [2024-10-07 11:34:06.506398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.506436] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:24.975 [2024-10-07 11:34:06.506561] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:24.975 [2024-10-07 11:34:06.506581] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:24.975 [2024-10-07 11:34:06.506612] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:24.975 [2024-10-07 11:34:06.506632] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:24.975 [2024-10-07 11:34:06.506643] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:24.975 [2024-10-07 11:34:06.506658] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:24.975 [2024-10-07 11:34:06.506668] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:24.975 [2024-10-07 11:34:06.506681] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:24.975 [2024-10-07 11:34:06.506691] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:24.975 [2024-10-07 11:34:06.506704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 [2024-10-07 11:34:06.506714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:24.975 [2024-10-07 11:34:06.506727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:23:24.975 [2024-10-07 11:34:06.506756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.506847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.975 
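The FTL instance starting up here sits on a bdev stack assembled over the preceding steps: a thin-provisioned lvol on the base NVMe device plus a split of the cache NVMe device, tied together by bdev_ftl_create. A condensed replay of those RPCs as a sketch, assuming a running spdk_tgt and reusing this run's sizes; capturing the printed lvstore and lvol identifiers into shell variables is a convenience of the sketch, where the traced scripts go through helpers such as create_base_bdev and create_nv_cache_bdev:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # prints the new lvstore UUID
    base=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # 103424 MiB thin-provisioned lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split: nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10        # first start scrubs the NV cache

The 240-second RPC timeout mirrors trim.sh's timeout=240; it leaves headroom for the "Scrub NV cache" step, which completes in about 2.7 s in this run but grows with the cache size.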
[2024-10-07 11:34:06.506862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:24.975 [2024-10-07 11:34:06.506875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:24.975 [2024-10-07 11:34:06.506885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.975 [2024-10-07 11:34:06.506997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:24.975 [2024-10-07 11:34:06.507009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:24.975 [2024-10-07 11:34:06.507023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.975 [2024-10-07 11:34:06.507033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:24.975 [2024-10-07 11:34:06.507055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:24.975 [2024-10-07 11:34:06.507077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:24.975 [2024-10-07 11:34:06.507088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.975 [2024-10-07 11:34:06.507110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:24.975 [2024-10-07 11:34:06.507120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:24.975 [2024-10-07 11:34:06.507132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.975 [2024-10-07 11:34:06.507142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:24.975 [2024-10-07 11:34:06.507155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:24.975 [2024-10-07 11:34:06.507165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:24.975 [2024-10-07 11:34:06.507188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:24.975 [2024-10-07 11:34:06.507200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:24.975 [2024-10-07 11:34:06.507221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.975 [2024-10-07 11:34:06.507243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:24.975 [2024-10-07 11:34:06.507253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.975 [2024-10-07 11:34:06.507274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:24.975 [2024-10-07 11:34:06.507285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.975 [2024-10-07 11:34:06.507306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:23:24.975 [2024-10-07 11:34:06.507316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.975 [2024-10-07 11:34:06.507337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:24.975 [2024-10-07 11:34:06.507350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.975 [2024-10-07 11:34:06.507371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:24.975 [2024-10-07 11:34:06.507380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:24.975 [2024-10-07 11:34:06.507391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.975 [2024-10-07 11:34:06.507401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:24.975 [2024-10-07 11:34:06.507412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:24.975 [2024-10-07 11:34:06.507421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:24.975 [2024-10-07 11:34:06.507442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:24.975 [2024-10-07 11:34:06.507453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507462] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:24.975 [2024-10-07 11:34:06.507475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:24.975 [2024-10-07 11:34:06.507488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.975 [2024-10-07 11:34:06.507502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.975 [2024-10-07 11:34:06.507513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:24.975 [2024-10-07 11:34:06.507529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:24.975 [2024-10-07 11:34:06.507542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:24.975 [2024-10-07 11:34:06.507555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:24.975 [2024-10-07 11:34:06.507565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:24.976 [2024-10-07 11:34:06.507576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:24.976 [2024-10-07 11:34:06.507590] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:24.976 [2024-10-07 11:34:06.507615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.976 [2024-10-07 11:34:06.507627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:24.976 [2024-10-07 11:34:06.507642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:24.976 [2024-10-07 11:34:06.507653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:23:24.976 [2024-10-07 11:34:06.507668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:24.976 [2024-10-07 11:34:06.507679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:24.976 [2024-10-07 11:34:06.507694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:24.976 [2024-10-07 11:34:06.507705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:24.976 [2024-10-07 11:34:06.507720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:24.976 [2024-10-07 11:34:06.507731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:24.976 [2024-10-07 11:34:06.507767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:24.976 [2024-10-07 11:34:06.507779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:24.976 [2024-10-07 11:34:06.507794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:24.976 [2024-10-07 11:34:06.507805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:24.976 [2024-10-07 11:34:06.507818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:24.976 [2024-10-07 11:34:06.507828] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:24.976 [2024-10-07 11:34:06.507842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.976 [2024-10-07 11:34:06.507860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:24.976 [2024-10-07 11:34:06.507875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:24.976 [2024-10-07 11:34:06.507885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:24.976 [2024-10-07 11:34:06.507898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:24.976 [2024-10-07 11:34:06.507909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.976 [2024-10-07 11:34:06.507922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:24.976 [2024-10-07 11:34:06.507933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:23:24.976 [2024-10-07 11:34:06.507945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.976 [2024-10-07 11:34:06.508032] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:23:24.976 [2024-10-07 11:34:06.508050] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:27.519 [2024-10-07 11:34:09.189799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.519 [2024-10-07 11:34:09.189877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:27.519 [2024-10-07 11:34:09.189896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2686.118 ms 00:23:27.519 [2024-10-07 11:34:09.189910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.239730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.239794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:27.778 [2024-10-07 11:34:09.239810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.585 ms 00:23:27.778 [2024-10-07 11:34:09.239824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.239981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.239998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:27.778 [2024-10-07 11:34:09.240010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:27.778 [2024-10-07 11:34:09.240025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.287975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.288031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:27.778 [2024-10-07 11:34:09.288046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.992 ms 00:23:27.778 [2024-10-07 11:34:09.288059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.288179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.288198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:27.778 [2024-10-07 11:34:09.288212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:27.778 [2024-10-07 11:34:09.288225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.288677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.288693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:27.778 [2024-10-07 11:34:09.288705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:23:27.778 [2024-10-07 11:34:09.288717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.288847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.288862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:27.778 [2024-10-07 11:34:09.288873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:27.778 [2024-10-07 11:34:09.288891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.309937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.309989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:23:27.778 [2024-10-07 11:34:09.310007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.046 ms 00:23:27.778 [2024-10-07 11:34:09.310020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.322906] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:27.778 [2024-10-07 11:34:09.339643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.339697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:27.778 [2024-10-07 11:34:09.339717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.530 ms 00:23:27.778 [2024-10-07 11:34:09.339728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.421062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.421128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:27.778 [2024-10-07 11:34:09.421148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.322 ms 00:23:27.778 [2024-10-07 11:34:09.421159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.421409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.421423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:27.778 [2024-10-07 11:34:09.421443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:23:27.778 [2024-10-07 11:34:09.421454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.778 [2024-10-07 11:34:09.458409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.778 [2024-10-07 11:34:09.458459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:27.778 [2024-10-07 11:34:09.458479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.972 ms 00:23:27.778 [2024-10-07 11:34:09.458490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.038 [2024-10-07 11:34:09.496161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.038 [2024-10-07 11:34:09.496347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:28.038 [2024-10-07 11:34:09.496378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.632 ms 00:23:28.038 [2024-10-07 11:34:09.496389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.038 [2024-10-07 11:34:09.497340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.038 [2024-10-07 11:34:09.497370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:28.038 [2024-10-07 11:34:09.497385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:23:28.038 [2024-10-07 11:34:09.497396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.038 [2024-10-07 11:34:09.603919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.038 [2024-10-07 11:34:09.604143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:28.038 [2024-10-07 11:34:09.604178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.641 ms 00:23:28.038 [2024-10-07 11:34:09.604190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
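The sizing in the startup dump above is internally consistent and worth a quick cross-check; all values come from this run's log, and the arithmetic itself is the only addition here:

    echo $(( 23592960 * 4 ))            # L2P entries x 4 B address size = 94371840 B
    echo $(( 0x5a00 * 4096 ))           # l2p region: 0x5a00 blocks x 4 KiB = 94371840 B
    echo $(( 94371840 / 1024 / 1024 ))  # = 90 MiB, the "blocks: 90.00 MiB" l2p region

So the full mapping table needs 90 MiB, while --l2p_dram_limit 60 caps the resident portion, which is why ftl_l2p_cache reports "l2p maximum resident size is: 59 (of 60) MiB". With one L2P entry per user block, the finished ftl0 bdev exposes num_blocks 23592960, as the bdev dump further down confirms.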
00:23:28.038 [2024-10-07 11:34:09.644018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.038 [2024-10-07 11:34:09.644078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:28.038 [2024-10-07 11:34:09.644098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.743 ms 00:23:28.038 [2024-10-07 11:34:09.644109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.038 [2024-10-07 11:34:09.682324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.038 [2024-10-07 11:34:09.682504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:28.038 [2024-10-07 11:34:09.682533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.182 ms 00:23:28.038 [2024-10-07 11:34:09.682544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.038 [2024-10-07 11:34:09.721777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.038 [2024-10-07 11:34:09.721949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:28.038 [2024-10-07 11:34:09.721994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.137 ms 00:23:28.038 [2024-10-07 11:34:09.722005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.038 [2024-10-07 11:34:09.722114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.038 [2024-10-07 11:34:09.722129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:28.038 [2024-10-07 11:34:09.722147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:28.038 [2024-10-07 11:34:09.722175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.038 [2024-10-07 11:34:09.722268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.038 [2024-10-07 11:34:09.722289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:28.038 [2024-10-07 11:34:09.722303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:28.038 [2024-10-07 11:34:09.722317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.038 [2024-10-07 11:34:09.723350] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:28.038 [2024-10-07 11:34:09.727957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3241.795 ms, result 0 00:23:28.038 [2024-10-07 11:34:09.728825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:28.038 { 00:23:28.038 "name": "ftl0", 00:23:28.038 "uuid": "c659e091-29a7-4fa5-b97b-7822cea8a8a4" 00:23:28.038 } 00:23:28.297 11:34:09 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:23:28.297 11:34:09 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:23:28.297 11:34:09 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:28.297 11:34:09 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:23:28.297 11:34:09 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:28.297 11:34:09 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:28.297 11:34:09 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:28.297 11:34:09 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:28.557 [ 00:23:28.557 { 00:23:28.557 "name": "ftl0", 00:23:28.557 "aliases": [ 00:23:28.557 "c659e091-29a7-4fa5-b97b-7822cea8a8a4" 00:23:28.557 ], 00:23:28.557 "product_name": "FTL disk", 00:23:28.557 "block_size": 4096, 00:23:28.557 "num_blocks": 23592960, 00:23:28.557 "uuid": "c659e091-29a7-4fa5-b97b-7822cea8a8a4", 00:23:28.557 "assigned_rate_limits": { 00:23:28.557 "rw_ios_per_sec": 0, 00:23:28.557 "rw_mbytes_per_sec": 0, 00:23:28.557 "r_mbytes_per_sec": 0, 00:23:28.557 "w_mbytes_per_sec": 0 00:23:28.557 }, 00:23:28.557 "claimed": false, 00:23:28.557 "zoned": false, 00:23:28.557 "supported_io_types": { 00:23:28.557 "read": true, 00:23:28.557 "write": true, 00:23:28.557 "unmap": true, 00:23:28.557 "flush": true, 00:23:28.557 "reset": false, 00:23:28.557 "nvme_admin": false, 00:23:28.557 "nvme_io": false, 00:23:28.557 "nvme_io_md": false, 00:23:28.557 "write_zeroes": true, 00:23:28.557 "zcopy": false, 00:23:28.557 "get_zone_info": false, 00:23:28.557 "zone_management": false, 00:23:28.557 "zone_append": false, 00:23:28.557 "compare": false, 00:23:28.557 "compare_and_write": false, 00:23:28.557 "abort": false, 00:23:28.557 "seek_hole": false, 00:23:28.557 "seek_data": false, 00:23:28.557 "copy": false, 00:23:28.557 "nvme_iov_md": false 00:23:28.557 }, 00:23:28.557 "driver_specific": { 00:23:28.557 "ftl": { 00:23:28.557 "base_bdev": "88b346d1-c7fe-4cad-bdcc-fbc09c59d129", 00:23:28.557 "cache": "nvc0n1p0" 00:23:28.557 } 00:23:28.557 } 00:23:28.557 } 00:23:28.557 ] 00:23:28.557 11:34:10 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:23:28.557 11:34:10 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:23:28.557 11:34:10 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:28.816 11:34:10 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:23:28.816 11:34:10 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:23:29.075 11:34:10 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:23:29.075 { 00:23:29.075 "name": "ftl0", 00:23:29.075 "aliases": [ 00:23:29.075 "c659e091-29a7-4fa5-b97b-7822cea8a8a4" 00:23:29.075 ], 00:23:29.075 "product_name": "FTL disk", 00:23:29.075 "block_size": 4096, 00:23:29.075 "num_blocks": 23592960, 00:23:29.075 "uuid": "c659e091-29a7-4fa5-b97b-7822cea8a8a4", 00:23:29.075 "assigned_rate_limits": { 00:23:29.075 "rw_ios_per_sec": 0, 00:23:29.075 "rw_mbytes_per_sec": 0, 00:23:29.075 "r_mbytes_per_sec": 0, 00:23:29.075 "w_mbytes_per_sec": 0 00:23:29.075 }, 00:23:29.075 "claimed": false, 00:23:29.075 "zoned": false, 00:23:29.075 "supported_io_types": { 00:23:29.075 "read": true, 00:23:29.075 "write": true, 00:23:29.075 "unmap": true, 00:23:29.075 "flush": true, 00:23:29.075 "reset": false, 00:23:29.075 "nvme_admin": false, 00:23:29.075 "nvme_io": false, 00:23:29.075 "nvme_io_md": false, 00:23:29.075 "write_zeroes": true, 00:23:29.075 "zcopy": false, 00:23:29.075 "get_zone_info": false, 00:23:29.075 "zone_management": false, 00:23:29.075 "zone_append": false, 00:23:29.075 "compare": false, 00:23:29.075 "compare_and_write": false, 00:23:29.075 "abort": false, 00:23:29.075 "seek_hole": false, 00:23:29.075 "seek_data": false, 00:23:29.075 "copy": false, 00:23:29.075 "nvme_iov_md": false 00:23:29.075 }, 00:23:29.075 "driver_specific": { 00:23:29.075 "ftl": { 00:23:29.075 "base_bdev": "88b346d1-c7fe-4cad-bdcc-fbc09c59d129", 
00:23:29.075 "cache": "nvc0n1p0" 00:23:29.075 } 00:23:29.075 } 00:23:29.075 } 00:23:29.075 ]' 00:23:29.075 11:34:10 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:23:29.075 11:34:10 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:23:29.075 11:34:10 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:29.335 [2024-10-07 11:34:10.841913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.842145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:29.335 [2024-10-07 11:34:10.842172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:29.335 [2024-10-07 11:34:10.842187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.842284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:29.335 [2024-10-07 11:34:10.846576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.846608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:29.335 [2024-10-07 11:34:10.846627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:23:29.335 [2024-10-07 11:34:10.846638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.847668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.847699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:29.335 [2024-10-07 11:34:10.847717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:23:29.335 [2024-10-07 11:34:10.847728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.850597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.850621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:29.335 [2024-10-07 11:34:10.850635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.795 ms 00:23:29.335 [2024-10-07 11:34:10.850646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.856348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.856383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:29.335 [2024-10-07 11:34:10.856403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.632 ms 00:23:29.335 [2024-10-07 11:34:10.856414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.893959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.894005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:29.335 [2024-10-07 11:34:10.894026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.468 ms 00:23:29.335 [2024-10-07 11:34:10.894037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.916454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.916615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:29.335 [2024-10-07 11:34:10.916644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.326 ms 00:23:29.335 [2024-10-07 11:34:10.916655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.917033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.917048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:29.335 [2024-10-07 11:34:10.917062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:23:29.335 [2024-10-07 11:34:10.917072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.954765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.954926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:29.335 [2024-10-07 11:34:10.954953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.693 ms 00:23:29.335 [2024-10-07 11:34:10.954964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:10.990975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:10.991018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:29.335 [2024-10-07 11:34:10.991040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.947 ms 00:23:29.335 [2024-10-07 11:34:10.991050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.335 [2024-10-07 11:34:11.027902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.335 [2024-10-07 11:34:11.027944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:29.335 [2024-10-07 11:34:11.027962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.791 ms 00:23:29.335 [2024-10-07 11:34:11.027972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.597 [2024-10-07 11:34:11.064577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.597 [2024-10-07 11:34:11.064620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:29.597 [2024-10-07 11:34:11.064637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.445 ms 00:23:29.597 [2024-10-07 11:34:11.064648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.597 [2024-10-07 11:34:11.064776] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:29.597 [2024-10-07 11:34:11.064795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064891] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.064990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 
[2024-10-07 11:34:11.065222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:29.597 [2024-10-07 11:34:11.065482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:23:29.598 [2024-10-07 11:34:11.065530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.065996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.066009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.066021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.066034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.066045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.066060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:29.598 [2024-10-07 11:34:11.066079] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:29.598 [2024-10-07 11:34:11.066098] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c659e091-29a7-4fa5-b97b-7822cea8a8a4 00:23:29.598 [2024-10-07 11:34:11.066109] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:29.598 [2024-10-07 11:34:11.066122] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:29.598 [2024-10-07 11:34:11.066132] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:29.598 [2024-10-07 11:34:11.066145] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:29.598 [2024-10-07 11:34:11.066155] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:29.598 [2024-10-07 11:34:11.066168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:23:29.598 [2024-10-07 11:34:11.066177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:29.598 [2024-10-07 11:34:11.066189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:29.598 [2024-10-07 11:34:11.066198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:29.598 [2024-10-07 11:34:11.066213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.598 [2024-10-07 11:34:11.066224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:29.598 [2024-10-07 11:34:11.066237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:23:29.598 [2024-10-07 11:34:11.066247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.598 [2024-10-07 11:34:11.086943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.598 [2024-10-07 11:34:11.086980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:29.598 [2024-10-07 11:34:11.087000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.663 ms 00:23:29.598 [2024-10-07 11:34:11.087010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.598 [2024-10-07 11:34:11.087601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.598 [2024-10-07 11:34:11.087618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:29.598 [2024-10-07 11:34:11.087636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:23:29.598 [2024-10-07 11:34:11.087647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.598 [2024-10-07 11:34:11.159491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.598 [2024-10-07 11:34:11.159547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.598 [2024-10-07 11:34:11.159566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.598 [2024-10-07 11:34:11.159577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.598 [2024-10-07 11:34:11.159780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.598 [2024-10-07 11:34:11.159796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.598 [2024-10-07 11:34:11.159814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.598 [2024-10-07 11:34:11.159824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.598 [2024-10-07 11:34:11.159916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.598 [2024-10-07 11:34:11.159928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.598 [2024-10-07 11:34:11.159944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.598 [2024-10-07 11:34:11.159954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.598 [2024-10-07 11:34:11.160008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.598 [2024-10-07 11:34:11.160019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.598 [2024-10-07 11:34:11.160033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.598 [2024-10-07 11:34:11.160043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.598 [2024-10-07 11:34:11.294948] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.598 [2024-10-07 11:34:11.295011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.598 [2024-10-07 11:34:11.295028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.598 [2024-10-07 11:34:11.295040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.867 [2024-10-07 11:34:11.398490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.867 [2024-10-07 11:34:11.398756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.867 [2024-10-07 11:34:11.398786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.867 [2024-10-07 11:34:11.398800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.867 [2024-10-07 11:34:11.398975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.867 [2024-10-07 11:34:11.398988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.867 [2024-10-07 11:34:11.399004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.867 [2024-10-07 11:34:11.399014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.867 [2024-10-07 11:34:11.399118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.867 [2024-10-07 11:34:11.399130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.867 [2024-10-07 11:34:11.399162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.867 [2024-10-07 11:34:11.399172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.867 [2024-10-07 11:34:11.399335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.867 [2024-10-07 11:34:11.399348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.867 [2024-10-07 11:34:11.399361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.867 [2024-10-07 11:34:11.399371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.867 [2024-10-07 11:34:11.399453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.867 [2024-10-07 11:34:11.399466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:29.867 [2024-10-07 11:34:11.399479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.867 [2024-10-07 11:34:11.399489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.868 [2024-10-07 11:34:11.399565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.868 [2024-10-07 11:34:11.399577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.868 [2024-10-07 11:34:11.399593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.868 [2024-10-07 11:34:11.399603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.868 [2024-10-07 11:34:11.399680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.868 [2024-10-07 11:34:11.399694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.868 [2024-10-07 11:34:11.399710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.868 [2024-10-07 11:34:11.399721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:29.868 [2024-10-07 11:34:11.400000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 558.980 ms, result 0 00:23:29.868 true 00:23:29.868 11:34:11 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76132 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76132 ']' 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76132 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76132 00:23:29.868 killing process with pid 76132 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76132' 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76132 00:23:29.868 11:34:11 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76132 00:23:35.138 11:34:15 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:35.396 65536+0 records in 00:23:35.396 65536+0 records out 00:23:35.396 268435456 bytes (268 MB, 256 MiB) copied, 1.04375 s, 257 MB/s 00:23:35.396 11:34:16 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:35.396 [2024-10-07 11:34:16.969758] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:23:35.396 [2024-10-07 11:34:16.969882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76337 ] 00:23:35.654 [2024-10-07 11:34:17.141872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.654 [2024-10-07 11:34:17.352539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.222 [2024-10-07 11:34:17.694818] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.222 [2024-10-07 11:34:17.694888] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.222 [2024-10-07 11:34:17.856081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.856132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.222 [2024-10-07 11:34:17.856151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.222 [2024-10-07 11:34:17.856162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.859384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.859539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.222 [2024-10-07 11:34:17.859561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.205 ms 00:23:36.222 [2024-10-07 11:34:17.859572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.859681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.222 [2024-10-07 11:34:17.860689] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.222 [2024-10-07 11:34:17.860722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.860734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.222 [2024-10-07 11:34:17.860763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:23:36.222 [2024-10-07 11:34:17.860773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.862401] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.222 [2024-10-07 11:34:17.881812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.881849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.222 [2024-10-07 11:34:17.881863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.444 ms 00:23:36.222 [2024-10-07 11:34:17.881874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.881976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.881992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.222 [2024-10-07 11:34:17.882007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:36.222 [2024-10-07 11:34:17.882016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.888691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:36.222 [2024-10-07 11:34:17.888719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.222 [2024-10-07 11:34:17.888731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.642 ms 00:23:36.222 [2024-10-07 11:34:17.888751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.888852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.888888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.222 [2024-10-07 11:34:17.888899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:36.222 [2024-10-07 11:34:17.888909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.888940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.888951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.222 [2024-10-07 11:34:17.888962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:36.222 [2024-10-07 11:34:17.888972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.888996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:36.222 [2024-10-07 11:34:17.893861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.893890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.222 [2024-10-07 11:34:17.893902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:23:36.222 [2024-10-07 11:34:17.893912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.893983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.893999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.222 [2024-10-07 11:34:17.894011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:36.222 [2024-10-07 11:34:17.894020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.894044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.222 [2024-10-07 11:34:17.894067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.222 [2024-10-07 11:34:17.894103] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.222 [2024-10-07 11:34:17.894121] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:36.222 [2024-10-07 11:34:17.894213] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.222 [2024-10-07 11:34:17.894227] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.222 [2024-10-07 11:34:17.894240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.222 [2024-10-07 11:34:17.894253] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.222 [2024-10-07 11:34:17.894265] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.222 [2024-10-07 11:34:17.894284] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:36.222 [2024-10-07 11:34:17.894295] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.222 [2024-10-07 11:34:17.894305] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.222 [2024-10-07 11:34:17.894315] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.222 [2024-10-07 11:34:17.894325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.894335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.222 [2024-10-07 11:34:17.894349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:23:36.222 [2024-10-07 11:34:17.894359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.894436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.222 [2024-10-07 11:34:17.894447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.222 [2024-10-07 11:34:17.894457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:36.222 [2024-10-07 11:34:17.894467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.222 [2024-10-07 11:34:17.894554] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.222 [2024-10-07 11:34:17.894566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.222 [2024-10-07 11:34:17.894578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.222 [2024-10-07 11:34:17.894592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.222 [2024-10-07 11:34:17.894602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.222 [2024-10-07 11:34:17.894612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.222 [2024-10-07 11:34:17.894621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:36.222 [2024-10-07 11:34:17.894631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.222 [2024-10-07 11:34:17.894641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:36.222 [2024-10-07 11:34:17.894650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.222 [2024-10-07 11:34:17.894660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.222 [2024-10-07 11:34:17.894681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:36.222 [2024-10-07 11:34:17.894690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.222 [2024-10-07 11:34:17.894700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.222 [2024-10-07 11:34:17.894710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:36.222 [2024-10-07 11:34:17.894720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.222 [2024-10-07 11:34:17.894730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.222 [2024-10-07 11:34:17.894750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:36.222 [2024-10-07 11:34:17.894760] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.222 [2024-10-07 11:34:17.894769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.222 [2024-10-07 11:34:17.894778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:36.222 [2024-10-07 11:34:17.894788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.222 [2024-10-07 11:34:17.894797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.222 [2024-10-07 11:34:17.894806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:36.222 [2024-10-07 11:34:17.894815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.223 [2024-10-07 11:34:17.894824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.223 [2024-10-07 11:34:17.894833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:36.223 [2024-10-07 11:34:17.894842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.223 [2024-10-07 11:34:17.894851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.223 [2024-10-07 11:34:17.894861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:36.223 [2024-10-07 11:34:17.894869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.223 [2024-10-07 11:34:17.894879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.223 [2024-10-07 11:34:17.894888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:36.223 [2024-10-07 11:34:17.894897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.223 [2024-10-07 11:34:17.894907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.223 [2024-10-07 11:34:17.894916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:36.223 [2024-10-07 11:34:17.894925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.223 [2024-10-07 11:34:17.894934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.223 [2024-10-07 11:34:17.894943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:36.223 [2024-10-07 11:34:17.894951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.223 [2024-10-07 11:34:17.894960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.223 [2024-10-07 11:34:17.894969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:36.223 [2024-10-07 11:34:17.894979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.223 [2024-10-07 11:34:17.894988] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.223 [2024-10-07 11:34:17.894998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.223 [2024-10-07 11:34:17.895011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.223 [2024-10-07 11:34:17.895021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.223 [2024-10-07 11:34:17.895031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.223 [2024-10-07 11:34:17.895041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.223 [2024-10-07 11:34:17.895050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.223 
[2024-10-07 11:34:17.895059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.223 [2024-10-07 11:34:17.895068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.223 [2024-10-07 11:34:17.895077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.223 [2024-10-07 11:34:17.895088] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.223 [2024-10-07 11:34:17.895101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.223 [2024-10-07 11:34:17.895116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:36.223 [2024-10-07 11:34:17.895126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:36.223 [2024-10-07 11:34:17.895137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:36.223 [2024-10-07 11:34:17.895148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:36.223 [2024-10-07 11:34:17.895158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:36.223 [2024-10-07 11:34:17.895168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:36.223 [2024-10-07 11:34:17.895179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:36.223 [2024-10-07 11:34:17.895191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:36.223 [2024-10-07 11:34:17.895201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:36.223 [2024-10-07 11:34:17.895211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:36.223 [2024-10-07 11:34:17.895222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:36.223 [2024-10-07 11:34:17.895232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:36.223 [2024-10-07 11:34:17.895242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:36.223 [2024-10-07 11:34:17.895252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:36.223 [2024-10-07 11:34:17.895262] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.223 [2024-10-07 11:34:17.895273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.223 [2024-10-07 11:34:17.895285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.223 [2024-10-07 11:34:17.895295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.223 [2024-10-07 11:34:17.895305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.223 [2024-10-07 11:34:17.895315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.223 [2024-10-07 11:34:17.895326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.223 [2024-10-07 11:34:17.895339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.223 [2024-10-07 11:34:17.895350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:23:36.223 [2024-10-07 11:34:17.895360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:17.945183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:17.945355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.482 [2024-10-07 11:34:17.945444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.847 ms 00:23:36.482 [2024-10-07 11:34:17.945481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:17.945656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:17.945781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.482 [2024-10-07 11:34:17.945859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:36.482 [2024-10-07 11:34:17.945891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:17.991306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:17.991461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.482 [2024-10-07 11:34:17.991589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.439 ms 00:23:36.482 [2024-10-07 11:34:17.991628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:17.991779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:17.991882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.482 [2024-10-07 11:34:17.991921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:36.482 [2024-10-07 11:34:17.991951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:17.992465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:17.992562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.482 [2024-10-07 11:34:17.992677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:23:36.482 [2024-10-07 11:34:17.992717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:17.992888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:17.992927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.482 [2024-10-07 11:34:17.993000] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:23:36.482 [2024-10-07 11:34:17.993035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:18.012951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:18.013091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.482 [2024-10-07 11:34:18.013179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.899 ms 00:23:36.482 [2024-10-07 11:34:18.013218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:18.032624] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:36.482 [2024-10-07 11:34:18.032795] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.482 [2024-10-07 11:34:18.032901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:18.032936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.482 [2024-10-07 11:34:18.032967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.521 ms 00:23:36.482 [2024-10-07 11:34:18.032997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:18.062436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:18.062569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.482 [2024-10-07 11:34:18.062642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.388 ms 00:23:36.482 [2024-10-07 11:34:18.062685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:18.080454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:18.080597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.482 [2024-10-07 11:34:18.080768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.692 ms 00:23:36.482 [2024-10-07 11:34:18.080806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:18.098862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:18.098991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.482 [2024-10-07 11:34:18.099158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.986 ms 00:23:36.482 [2024-10-07 11:34:18.099193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.482 [2024-10-07 11:34:18.099996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.482 [2024-10-07 11:34:18.100118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.482 [2024-10-07 11:34:18.100195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:23:36.483 [2024-10-07 11:34:18.100230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.483 [2024-10-07 11:34:18.185629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.483 [2024-10-07 11:34:18.185879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:36.483 [2024-10-07 11:34:18.186007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.482 ms 00:23:36.483 [2024-10-07 11:34:18.186027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.785 [2024-10-07 11:34:18.197039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:36.785 [2024-10-07 11:34:18.213338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.785 [2024-10-07 11:34:18.213400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:36.785 [2024-10-07 11:34:18.213416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.224 ms 00:23:36.785 [2024-10-07 11:34:18.213427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.785 [2024-10-07 11:34:18.213570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.785 [2024-10-07 11:34:18.213583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:36.785 [2024-10-07 11:34:18.213595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:36.785 [2024-10-07 11:34:18.213606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.785 [2024-10-07 11:34:18.213667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.785 [2024-10-07 11:34:18.213684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:36.785 [2024-10-07 11:34:18.213699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:36.785 [2024-10-07 11:34:18.213709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.785 [2024-10-07 11:34:18.213736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.785 [2024-10-07 11:34:18.213768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:36.785 [2024-10-07 11:34:18.213779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.785 [2024-10-07 11:34:18.213789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.785 [2024-10-07 11:34:18.213824] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:36.785 [2024-10-07 11:34:18.213838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.785 [2024-10-07 11:34:18.213848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:36.785 [2024-10-07 11:34:18.213858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:36.785 [2024-10-07 11:34:18.213872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.785 [2024-10-07 11:34:18.250679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.785 [2024-10-07 11:34:18.250729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.785 [2024-10-07 11:34:18.250754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.840 ms 00:23:36.785 [2024-10-07 11:34:18.250767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.785 [2024-10-07 11:34:18.250902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.785 [2024-10-07 11:34:18.250918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:36.785 [2024-10-07 11:34:18.250933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:36.785 [2024-10-07 11:34:18.250943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:36.785 [2024-10-07 11:34:18.251876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.785 [2024-10-07 11:34:18.256245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.106 ms, result 0 00:23:36.785 [2024-10-07 11:34:18.257099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.785 [2024-10-07 11:34:18.275774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:37.718  [2024-10-07T11:34:20.362Z] Copying: 25/256 [MB] (25 MBps) [2024-10-07T11:34:21.296Z] Copying: 52/256 [MB] (26 MBps) [2024-10-07T11:34:22.670Z] Copying: 78/256 [MB] (25 MBps) [2024-10-07T11:34:23.605Z] Copying: 104/256 [MB] (25 MBps) [2024-10-07T11:34:24.540Z] Copying: 129/256 [MB] (25 MBps) [2024-10-07T11:34:25.475Z] Copying: 155/256 [MB] (25 MBps) [2024-10-07T11:34:26.411Z] Copying: 182/256 [MB] (26 MBps) [2024-10-07T11:34:27.346Z] Copying: 209/256 [MB] (27 MBps) [2024-10-07T11:34:28.281Z] Copying: 236/256 [MB] (27 MBps) [2024-10-07T11:34:28.281Z] Copying: 256/256 [MB] (average 26 MBps)[2024-10-07 11:34:28.016808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:46.570 [2024-10-07 11:34:28.031578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.570 [2024-10-07 11:34:28.031806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:46.570 [2024-10-07 11:34:28.031832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:46.570 [2024-10-07 11:34:28.031844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.570 [2024-10-07 11:34:28.031883] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:46.570 [2024-10-07 11:34:28.036013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.570 [2024-10-07 11:34:28.036044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:46.570 [2024-10-07 11:34:28.036056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:23:46.570 [2024-10-07 11:34:28.036067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.570 [2024-10-07 11:34:28.038115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.570 [2024-10-07 11:34:28.038154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:46.570 [2024-10-07 11:34:28.038168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:23:46.570 [2024-10-07 11:34:28.038184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.570 [2024-10-07 11:34:28.044947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.570 [2024-10-07 11:34:28.044986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:46.571 [2024-10-07 11:34:28.045000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.754 ms 00:23:46.571 [2024-10-07 11:34:28.045012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.050672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.571 [2024-10-07 11:34:28.050706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:46.571 
[2024-10-07 11:34:28.050719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.630 ms 00:23:46.571 [2024-10-07 11:34:28.050736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.087399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.571 [2024-10-07 11:34:28.087448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:46.571 [2024-10-07 11:34:28.087463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.637 ms 00:23:46.571 [2024-10-07 11:34:28.087474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.108209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.571 [2024-10-07 11:34:28.108267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:46.571 [2024-10-07 11:34:28.108284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.704 ms 00:23:46.571 [2024-10-07 11:34:28.108295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.108484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.571 [2024-10-07 11:34:28.108498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:46.571 [2024-10-07 11:34:28.108509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:46.571 [2024-10-07 11:34:28.108520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.146034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.571 [2024-10-07 11:34:28.146109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:46.571 [2024-10-07 11:34:28.146125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.554 ms 00:23:46.571 [2024-10-07 11:34:28.146136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.183193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.571 [2024-10-07 11:34:28.183236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:46.571 [2024-10-07 11:34:28.183250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.048 ms 00:23:46.571 [2024-10-07 11:34:28.183261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.220044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.571 [2024-10-07 11:34:28.220109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:46.571 [2024-10-07 11:34:28.220126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.780 ms 00:23:46.571 [2024-10-07 11:34:28.220137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.258394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.571 [2024-10-07 11:34:28.258447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:46.571 [2024-10-07 11:34:28.258463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.181 ms 00:23:46.571 [2024-10-07 11:34:28.258474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.571 [2024-10-07 11:34:28.258550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:46.571 [2024-10-07 11:34:28.258571] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 
11:34:28.258875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.258999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:23:46.571 [2024-10-07 11:34:28.259149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:46.571 [2024-10-07 11:34:28.259318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:46.572 [2024-10-07 11:34:28.259699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:46.572 [2024-10-07 11:34:28.259709] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c659e091-29a7-4fa5-b97b-7822cea8a8a4 00:23:46.572 [2024-10-07 11:34:28.259721] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:46.572 [2024-10-07 11:34:28.259731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:46.572 [2024-10-07 11:34:28.259750] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:46.572 [2024-10-07 11:34:28.259761] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:46.572 [2024-10-07 11:34:28.259775] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:46.572 [2024-10-07 11:34:28.259785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:46.572 [2024-10-07 11:34:28.259795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:46.572 [2024-10-07 11:34:28.259804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:46.572 [2024-10-07 11:34:28.259813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:46.572 [2024-10-07 11:34:28.259824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.572 [2024-10-07 11:34:28.259834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:46.572 [2024-10-07 11:34:28.259845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:23:46.572 [2024-10-07 11:34:28.259855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.831 [2024-10-07 11:34:28.280555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.831 [2024-10-07 11:34:28.280597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:46.831 [2024-10-07 11:34:28.280617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.710 ms 00:23:46.831 [2024-10-07 11:34:28.280628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.831 [2024-10-07 11:34:28.281220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.831 [2024-10-07 11:34:28.281242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:46.831 [2024-10-07 11:34:28.281253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:23:46.831 [2024-10-07 11:34:28.281264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.831 [2024-10-07 11:34:28.330918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.831 [2024-10-07 11:34:28.330968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:46.831 [2024-10-07 11:34:28.330983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.831 [2024-10-07 11:34:28.330993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.831 [2024-10-07 11:34:28.331104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.831 [2024-10-07 11:34:28.331117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:46.831 [2024-10-07 11:34:28.331127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.831 [2024-10-07 11:34:28.331138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:46.831 [2024-10-07 11:34:28.331190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.831 [2024-10-07 11:34:28.331205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:46.831 [2024-10-07 11:34:28.331220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.831 [2024-10-07 11:34:28.331230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.831 [2024-10-07 11:34:28.331249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.831 [2024-10-07 11:34:28.331260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:46.832 [2024-10-07 11:34:28.331270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.832 [2024-10-07 11:34:28.331281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.832 [2024-10-07 11:34:28.458844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.832 [2024-10-07 11:34:28.458906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:46.832 [2024-10-07 11:34:28.458928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.832 [2024-10-07 11:34:28.458939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.089 [2024-10-07 11:34:28.562404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.089 [2024-10-07 11:34:28.562620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.089 [2024-10-07 11:34:28.562645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.089 [2024-10-07 11:34:28.562658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.089 [2024-10-07 11:34:28.562776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.089 [2024-10-07 11:34:28.562789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:47.089 [2024-10-07 11:34:28.562800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.089 [2024-10-07 11:34:28.562810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.089 [2024-10-07 11:34:28.562846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.089 [2024-10-07 11:34:28.562858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:47.089 [2024-10-07 11:34:28.562868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.089 [2024-10-07 11:34:28.562878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.089 [2024-10-07 11:34:28.562990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.089 [2024-10-07 11:34:28.563003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:47.089 [2024-10-07 11:34:28.563014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.089 [2024-10-07 11:34:28.563024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.089 [2024-10-07 11:34:28.563068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.089 [2024-10-07 11:34:28.563080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:47.089 [2024-10-07 11:34:28.563090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.089 
[2024-10-07 11:34:28.563101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.089 [2024-10-07 11:34:28.563142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.089 [2024-10-07 11:34:28.563153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:47.089 [2024-10-07 11:34:28.563163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.089 [2024-10-07 11:34:28.563173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.089 [2024-10-07 11:34:28.563223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.089 [2024-10-07 11:34:28.563235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:47.089 [2024-10-07 11:34:28.563246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.089 [2024-10-07 11:34:28.563256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.089 [2024-10-07 11:34:28.563404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.682 ms, result 0 00:23:48.464 00:23:48.464 00:23:48.464 11:34:29 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76473 00:23:48.464 11:34:29 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:48.464 11:34:29 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76473 00:23:48.464 11:34:29 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76473 ']' 00:23:48.464 11:34:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.464 11:34:29 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.464 11:34:29 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.464 11:34:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.464 11:34:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:48.464 [2024-10-07 11:34:30.063109] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:23:48.464 [2024-10-07 11:34:30.063241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76473 ] 00:23:48.722 [2024-10-07 11:34:30.225829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.994 [2024-10-07 11:34:30.451282] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.949 11:34:31 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.949 11:34:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:23:49.949 11:34:31 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:49.949 [2024-10-07 11:34:31.566898] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:49.949 [2024-10-07 11:34:31.566973] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:50.209 [2024-10-07 11:34:31.745168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.745224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:50.209 [2024-10-07 11:34:31.745243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:50.209 [2024-10-07 11:34:31.745254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.748385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.748535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:50.209 [2024-10-07 11:34:31.748561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:23:50.209 [2024-10-07 11:34:31.748574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.748690] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:50.209 [2024-10-07 11:34:31.749669] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:50.209 [2024-10-07 11:34:31.749708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.749719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:50.209 [2024-10-07 11:34:31.749732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:23:50.209 [2024-10-07 11:34:31.749755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.751237] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:50.209 [2024-10-07 11:34:31.769998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.770052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:50.209 [2024-10-07 11:34:31.770070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.793 ms 00:23:50.209 [2024-10-07 11:34:31.770083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.770218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.770257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:50.209 [2024-10-07 11:34:31.770276] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:50.209 [2024-10-07 11:34:31.770306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.777315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.777362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:50.209 [2024-10-07 11:34:31.777376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.965 ms 00:23:50.209 [2024-10-07 11:34:31.777390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.777524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.777541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:50.209 [2024-10-07 11:34:31.777552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:50.209 [2024-10-07 11:34:31.777565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.777596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.777611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:50.209 [2024-10-07 11:34:31.777622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:50.209 [2024-10-07 11:34:31.777634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.777662] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:50.209 [2024-10-07 11:34:31.782633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.782667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:50.209 [2024-10-07 11:34:31.782682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.982 ms 00:23:50.209 [2024-10-07 11:34:31.782695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.782792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.782822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:50.209 [2024-10-07 11:34:31.782837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:50.209 [2024-10-07 11:34:31.782847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.782874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:50.209 [2024-10-07 11:34:31.782895] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:50.209 [2024-10-07 11:34:31.782943] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:50.209 [2024-10-07 11:34:31.782967] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:50.209 [2024-10-07 11:34:31.783061] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:50.209 [2024-10-07 11:34:31.783074] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:50.209 [2024-10-07 11:34:31.783093] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:50.209 [2024-10-07 11:34:31.783105] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:50.209 [2024-10-07 11:34:31.783120] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:50.209 [2024-10-07 11:34:31.783132] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:50.209 [2024-10-07 11:34:31.783144] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:50.209 [2024-10-07 11:34:31.783154] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:50.209 [2024-10-07 11:34:31.783169] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:50.209 [2024-10-07 11:34:31.783182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.783195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:50.209 [2024-10-07 11:34:31.783206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:23:50.209 [2024-10-07 11:34:31.783219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.783295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.209 [2024-10-07 11:34:31.783309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:50.209 [2024-10-07 11:34:31.783319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:50.209 [2024-10-07 11:34:31.783332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.209 [2024-10-07 11:34:31.783423] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:50.209 [2024-10-07 11:34:31.783447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:50.209 [2024-10-07 11:34:31.783458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:50.209 [2024-10-07 11:34:31.783471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.209 [2024-10-07 11:34:31.783481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:50.209 [2024-10-07 11:34:31.783493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:50.209 [2024-10-07 11:34:31.783503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:50.209 [2024-10-07 11:34:31.783519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:50.209 [2024-10-07 11:34:31.783529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:50.209 [2024-10-07 11:34:31.783567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:50.209 [2024-10-07 11:34:31.783577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:50.209 [2024-10-07 11:34:31.783590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:50.209 [2024-10-07 11:34:31.783600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:50.209 [2024-10-07 11:34:31.783612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:50.209 [2024-10-07 11:34:31.783622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:50.209 [2024-10-07 11:34:31.783634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.210 
[2024-10-07 11:34:31.783643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:50.210 [2024-10-07 11:34:31.783655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:50.210 [2024-10-07 11:34:31.783674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.210 [2024-10-07 11:34:31.783687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:50.210 [2024-10-07 11:34:31.783696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:50.210 [2024-10-07 11:34:31.783708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:50.210 [2024-10-07 11:34:31.783717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:50.210 [2024-10-07 11:34:31.783731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:50.210 [2024-10-07 11:34:31.783751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:50.210 [2024-10-07 11:34:31.783763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:50.210 [2024-10-07 11:34:31.783773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:50.210 [2024-10-07 11:34:31.783784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:50.210 [2024-10-07 11:34:31.783793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:50.210 [2024-10-07 11:34:31.783806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:50.210 [2024-10-07 11:34:31.783815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:50.210 [2024-10-07 11:34:31.783826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:50.210 [2024-10-07 11:34:31.783836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:50.210 [2024-10-07 11:34:31.783849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:50.210 [2024-10-07 11:34:31.783859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:50.210 [2024-10-07 11:34:31.783871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:50.210 [2024-10-07 11:34:31.783879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:50.210 [2024-10-07 11:34:31.783892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:50.210 [2024-10-07 11:34:31.783901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:50.210 [2024-10-07 11:34:31.783916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.210 [2024-10-07 11:34:31.783925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:50.210 [2024-10-07 11:34:31.783937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:50.210 [2024-10-07 11:34:31.783946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.210 [2024-10-07 11:34:31.783959] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:50.210 [2024-10-07 11:34:31.783969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:50.210 [2024-10-07 11:34:31.783982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:50.210 [2024-10-07 11:34:31.783991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.210 [2024-10-07 11:34:31.784004] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:50.210 [2024-10-07 11:34:31.784014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:50.210 [2024-10-07 11:34:31.784025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:50.210 [2024-10-07 11:34:31.784035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:50.210 [2024-10-07 11:34:31.784047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:50.210 [2024-10-07 11:34:31.784056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:50.210 [2024-10-07 11:34:31.784069] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:50.210 [2024-10-07 11:34:31.784082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:50.210 [2024-10-07 11:34:31.784099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:50.210 [2024-10-07 11:34:31.784110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:50.210 [2024-10-07 11:34:31.784123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:50.210 [2024-10-07 11:34:31.784134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:50.210 [2024-10-07 11:34:31.784149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:50.210 [2024-10-07 11:34:31.784159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:50.210 [2024-10-07 11:34:31.784173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:50.210 [2024-10-07 11:34:31.784183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:50.210 [2024-10-07 11:34:31.784196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:50.210 [2024-10-07 11:34:31.784206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:50.210 [2024-10-07 11:34:31.784219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:50.210 [2024-10-07 11:34:31.784230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:50.210 [2024-10-07 11:34:31.784243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:50.210 [2024-10-07 11:34:31.784253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:50.210 [2024-10-07 11:34:31.784266] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:50.210 [2024-10-07 
11:34:31.784277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:50.210 [2024-10-07 11:34:31.784297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:50.210 [2024-10-07 11:34:31.784308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:50.210 [2024-10-07 11:34:31.784321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:50.210 [2024-10-07 11:34:31.784331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:50.210 [2024-10-07 11:34:31.784345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.210 [2024-10-07 11:34:31.784356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:50.210 [2024-10-07 11:34:31.784369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:23:50.210 [2024-10-07 11:34:31.784379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.210 [2024-10-07 11:34:31.827155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.210 [2024-10-07 11:34:31.827209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:50.210 [2024-10-07 11:34:31.827229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.776 ms 00:23:50.210 [2024-10-07 11:34:31.827240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.210 [2024-10-07 11:34:31.827414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.210 [2024-10-07 11:34:31.827427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:50.210 [2024-10-07 11:34:31.827441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:50.210 [2024-10-07 11:34:31.827460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.210 [2024-10-07 11:34:31.886070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.210 [2024-10-07 11:34:31.886130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:50.210 [2024-10-07 11:34:31.886153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.668 ms 00:23:50.210 [2024-10-07 11:34:31.886164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.210 [2024-10-07 11:34:31.886314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.210 [2024-10-07 11:34:31.886328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:50.210 [2024-10-07 11:34:31.886344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:50.210 [2024-10-07 11:34:31.886358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.210 [2024-10-07 11:34:31.886821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.210 [2024-10-07 11:34:31.886836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:50.210 [2024-10-07 11:34:31.886851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:23:50.210 [2024-10-07 11:34:31.886861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:50.210 [2024-10-07 11:34:31.886991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.210 [2024-10-07 11:34:31.887005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:50.210 [2024-10-07 11:34:31.887018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:50.210 [2024-10-07 11:34:31.887028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.210 [2024-10-07 11:34:31.912657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.210 [2024-10-07 11:34:31.912719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:50.210 [2024-10-07 11:34:31.912768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.628 ms 00:23:50.210 [2024-10-07 11:34:31.912793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:31.931688] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:50.470 [2024-10-07 11:34:31.931766] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:50.470 [2024-10-07 11:34:31.931801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:31.931833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:50.470 [2024-10-07 11:34:31.931857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.823 ms 00:23:50.470 [2024-10-07 11:34:31.931872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:31.961955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:31.962007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:50.470 [2024-10-07 11:34:31.962029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.009 ms 00:23:50.470 [2024-10-07 11:34:31.962053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:31.980808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:31.980847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:50.470 [2024-10-07 11:34:31.980872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.684 ms 00:23:50.470 [2024-10-07 11:34:31.980882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:31.998926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:31.999085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:50.470 [2024-10-07 11:34:31.999117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.982 ms 00:23:50.470 [2024-10-07 11:34:31.999128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.000023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.000059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:50.470 [2024-10-07 11:34:32.000074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:23:50.470 [2024-10-07 11:34:32.000085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 
11:34:32.087470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.087539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:50.470 [2024-10-07 11:34:32.087564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.485 ms 00:23:50.470 [2024-10-07 11:34:32.087581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.099098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:50.470 [2024-10-07 11:34:32.115808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.115890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:50.470 [2024-10-07 11:34:32.115907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.142 ms 00:23:50.470 [2024-10-07 11:34:32.115923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.116080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.116100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:50.470 [2024-10-07 11:34:32.116112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:50.470 [2024-10-07 11:34:32.116127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.116193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.116210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:50.470 [2024-10-07 11:34:32.116221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:50.470 [2024-10-07 11:34:32.116237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.116263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.116279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:50.470 [2024-10-07 11:34:32.116290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:50.470 [2024-10-07 11:34:32.116314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.116353] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:50.470 [2024-10-07 11:34:32.116373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.116383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:50.470 [2024-10-07 11:34:32.116396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:50.470 [2024-10-07 11:34:32.116407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.153385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.153445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:50.470 [2024-10-07 11:34:32.153468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.003 ms 00:23:50.470 [2024-10-07 11:34:32.153481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.153632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.470 [2024-10-07 11:34:32.153649] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:50.470 [2024-10-07 11:34:32.153666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:50.470 [2024-10-07 11:34:32.153679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.470 [2024-10-07 11:34:32.154983] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:50.470 [2024-10-07 11:34:32.159602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.986 ms, result 0 00:23:50.470 [2024-10-07 11:34:32.160938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:50.729 Some configs were skipped because the RPC state that can call them passed over. 00:23:50.729 11:34:32 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:50.730 [2024-10-07 11:34:32.420770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.730 [2024-10-07 11:34:32.420869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:50.730 [2024-10-07 11:34:32.420902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:23:50.730 [2024-10-07 11:34:32.420925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.730 [2024-10-07 11:34:32.420983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.760 ms, result 0 00:23:50.730 true 00:23:50.730 11:34:32 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:50.988 [2024-10-07 11:34:32.640035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.988 [2024-10-07 11:34:32.640086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:50.988 [2024-10-07 11:34:32.640105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:23:50.988 [2024-10-07 11:34:32.640115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.988 [2024-10-07 11:34:32.640159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.124 ms, result 0 00:23:50.988 true 00:23:50.988 11:34:32 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76473 00:23:50.988 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76473 ']' 00:23:50.988 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76473 00:23:50.988 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:23:50.988 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.988 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76473 00:23:51.247 killing process with pid 76473 00:23:51.247 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:51.247 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:51.247 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76473' 00:23:51.247 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76473 00:23:51.247 11:34:32 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76473 00:23:52.192 [2024-10-07 11:34:33.820295] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.192 [2024-10-07 11:34:33.820365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:52.192 [2024-10-07 11:34:33.820384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:52.192 [2024-10-07 11:34:33.820400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.192 [2024-10-07 11:34:33.820431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:52.192 [2024-10-07 11:34:33.824367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.192 [2024-10-07 11:34:33.824405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:52.192 [2024-10-07 11:34:33.824424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.907 ms 00:23:52.192 [2024-10-07 11:34:33.824435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.192 [2024-10-07 11:34:33.824707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.192 [2024-10-07 11:34:33.824726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:52.192 [2024-10-07 11:34:33.824750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:23:52.192 [2024-10-07 11:34:33.824764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.192 [2024-10-07 11:34:33.828262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.192 [2024-10-07 11:34:33.828301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:52.192 [2024-10-07 11:34:33.828317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.477 ms 00:23:52.192 [2024-10-07 11:34:33.828329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.192 [2024-10-07 11:34:33.835066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.192 [2024-10-07 11:34:33.835122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:52.192 [2024-10-07 11:34:33.835152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.703 ms 00:23:52.192 [2024-10-07 11:34:33.835173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.192 [2024-10-07 11:34:33.849548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.192 [2024-10-07 11:34:33.849585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:52.192 [2024-10-07 11:34:33.849607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.291 ms 00:23:52.192 [2024-10-07 11:34:33.849620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.192 [2024-10-07 11:34:33.859508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.193 [2024-10-07 11:34:33.859546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:52.193 [2024-10-07 11:34:33.859565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.850 ms 00:23:52.193 [2024-10-07 11:34:33.859589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.193 [2024-10-07 11:34:33.859729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.193 [2024-10-07 11:34:33.859762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:52.193 [2024-10-07 11:34:33.859779] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:52.193 [2024-10-07 11:34:33.859795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.193 [2024-10-07 11:34:33.875035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.193 [2024-10-07 11:34:33.875071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:52.193 [2024-10-07 11:34:33.875094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.229 ms 00:23:52.193 [2024-10-07 11:34:33.875106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.193 [2024-10-07 11:34:33.889710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.193 [2024-10-07 11:34:33.889750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:52.193 [2024-10-07 11:34:33.889782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.560 ms 00:23:52.193 [2024-10-07 11:34:33.889794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.452 [2024-10-07 11:34:33.903683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.452 [2024-10-07 11:34:33.903731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:52.452 [2024-10-07 11:34:33.903763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.848 ms 00:23:52.452 [2024-10-07 11:34:33.903776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.452 [2024-10-07 11:34:33.918623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.452 [2024-10-07 11:34:33.918668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:52.452 [2024-10-07 11:34:33.918690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.770 ms 00:23:52.452 [2024-10-07 11:34:33.918700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.453 [2024-10-07 11:34:33.918773] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:52.453 [2024-10-07 11:34:33.918801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 
11:34:33.918952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.918995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:52.453 [2024-10-07 11:34:33.919302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.919997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.920009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.920029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.920041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.920057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.920068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.920085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:52.453 [2024-10-07 11:34:33.920096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:52.454 [2024-10-07 11:34:33.920242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:52.454 [2024-10-07 11:34:33.920262] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c659e091-29a7-4fa5-b97b-7822cea8a8a4 00:23:52.454 [2024-10-07 11:34:33.920273] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:52.454 [2024-10-07 11:34:33.920288] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:52.454 [2024-10-07 11:34:33.920299] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:52.454 [2024-10-07 11:34:33.920315] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:52.454 [2024-10-07 11:34:33.920339] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:52.454 [2024-10-07 11:34:33.920356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:52.454 [2024-10-07 11:34:33.920371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:52.454 [2024-10-07 11:34:33.920386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:52.454 [2024-10-07 11:34:33.920395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:52.454 [2024-10-07 11:34:33.920410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:52.454 [2024-10-07 11:34:33.920421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:52.454 [2024-10-07 11:34:33.920439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:23:52.454 [2024-10-07 11:34:33.920450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.454 [2024-10-07 11:34:33.940667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.454 [2024-10-07 11:34:33.940700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:52.454 [2024-10-07 11:34:33.940727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.213 ms 00:23:52.454 [2024-10-07 11:34:33.940749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.454 [2024-10-07 11:34:33.941302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.454 [2024-10-07 11:34:33.941318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:52.454 [2024-10-07 11:34:33.941334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:23:52.454 [2024-10-07 11:34:33.941345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.454 [2024-10-07 11:34:34.002427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.454 [2024-10-07 11:34:34.002463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:52.454 [2024-10-07 11:34:34.002482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.454 [2024-10-07 11:34:34.002499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.454 [2024-10-07 11:34:34.002601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.454 [2024-10-07 11:34:34.002614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:52.454 [2024-10-07 11:34:34.002630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.454 [2024-10-07 11:34:34.002640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.454 [2024-10-07 11:34:34.002702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.454 [2024-10-07 11:34:34.002716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:52.454 [2024-10-07 11:34:34.002736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.454 [2024-10-07 11:34:34.002776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.454 [2024-10-07 11:34:34.002810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.454 [2024-10-07 11:34:34.002822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:52.454 [2024-10-07 11:34:34.002838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.454 [2024-10-07 11:34:34.002848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.454 [2024-10-07 11:34:34.127836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.454 [2024-10-07 11:34:34.127892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:52.454 [2024-10-07 11:34:34.127914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.454 [2024-10-07 11:34:34.127926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.713 [2024-10-07 
11:34:34.226799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.713 [2024-10-07 11:34:34.226853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:52.713 [2024-10-07 11:34:34.226878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.713 [2024-10-07 11:34:34.226892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.713 [2024-10-07 11:34:34.227001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.713 [2024-10-07 11:34:34.227016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:52.713 [2024-10-07 11:34:34.227041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.713 [2024-10-07 11:34:34.227054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.713 [2024-10-07 11:34:34.227093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.713 [2024-10-07 11:34:34.227119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:52.713 [2024-10-07 11:34:34.227138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.713 [2024-10-07 11:34:34.227151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.713 [2024-10-07 11:34:34.227276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.713 [2024-10-07 11:34:34.227292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:52.713 [2024-10-07 11:34:34.227323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.713 [2024-10-07 11:34:34.227338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.713 [2024-10-07 11:34:34.227388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.713 [2024-10-07 11:34:34.227404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:52.713 [2024-10-07 11:34:34.227428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.713 [2024-10-07 11:34:34.227441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.713 [2024-10-07 11:34:34.227493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.713 [2024-10-07 11:34:34.227507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:52.713 [2024-10-07 11:34:34.227531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.713 [2024-10-07 11:34:34.227544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.713 [2024-10-07 11:34:34.227601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.713 [2024-10-07 11:34:34.227621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:52.713 [2024-10-07 11:34:34.227637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.713 [2024-10-07 11:34:34.227650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.713 [2024-10-07 11:34:34.227818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 408.164 ms, result 0 00:23:54.089 11:34:35 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:54.089 11:34:35 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:54.089 [2024-10-07 11:34:35.501733] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:23:54.089 [2024-10-07 11:34:35.501871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76542 ] 00:23:54.089 [2024-10-07 11:34:35.674818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.349 [2024-10-07 11:34:35.899609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.608 [2024-10-07 11:34:36.273479] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:54.608 [2024-10-07 11:34:36.273553] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:54.872 [2024-10-07 11:34:36.436095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.436151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:54.872 [2024-10-07 11:34:36.436170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:54.872 [2024-10-07 11:34:36.436181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.439409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.439454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:54.872 [2024-10-07 11:34:36.439468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:23:54.872 [2024-10-07 11:34:36.439478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.439582] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:54.872 [2024-10-07 11:34:36.440642] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:54.872 [2024-10-07 11:34:36.440677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.440688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:54.872 [2024-10-07 11:34:36.440703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:23:54.872 [2024-10-07 11:34:36.440713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.442228] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:54.872 [2024-10-07 11:34:36.462572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.462618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:54.872 [2024-10-07 11:34:36.462633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.378 ms 00:23:54.872 [2024-10-07 11:34:36.462644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.462779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.462795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:54.872 [2024-10-07 11:34:36.462810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.045 ms 00:23:54.872 [2024-10-07 11:34:36.462820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.469878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.469910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:54.872 [2024-10-07 11:34:36.469924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.023 ms 00:23:54.872 [2024-10-07 11:34:36.469935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.470041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.470059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:54.872 [2024-10-07 11:34:36.470071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:54.872 [2024-10-07 11:34:36.470083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.470115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.470127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:54.872 [2024-10-07 11:34:36.470138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:54.872 [2024-10-07 11:34:36.470165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.470192] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:54.872 [2024-10-07 11:34:36.475277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.475312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:54.872 [2024-10-07 11:34:36.475325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.101 ms 00:23:54.872 [2024-10-07 11:34:36.475351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.475438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.475455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:54.872 [2024-10-07 11:34:36.475467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:54.872 [2024-10-07 11:34:36.475477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.475502] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:54.872 [2024-10-07 11:34:36.475525] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:54.872 [2024-10-07 11:34:36.475562] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:54.872 [2024-10-07 11:34:36.475581] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:54.872 [2024-10-07 11:34:36.475674] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:54.872 [2024-10-07 11:34:36.475702] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:54.872 [2024-10-07 11:34:36.475716] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:54.872 [2024-10-07 11:34:36.475729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:54.872 [2024-10-07 11:34:36.475752] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:54.872 [2024-10-07 11:34:36.475774] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:54.872 [2024-10-07 11:34:36.475784] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:54.872 [2024-10-07 11:34:36.475810] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:54.872 [2024-10-07 11:34:36.475820] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:54.872 [2024-10-07 11:34:36.475832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.475843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:54.872 [2024-10-07 11:34:36.475858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:23:54.872 [2024-10-07 11:34:36.475868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.475956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.872 [2024-10-07 11:34:36.475976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:54.872 [2024-10-07 11:34:36.475987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:54.872 [2024-10-07 11:34:36.475998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.872 [2024-10-07 11:34:36.476092] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:54.872 [2024-10-07 11:34:36.476106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:54.872 [2024-10-07 11:34:36.476117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:54.872 [2024-10-07 11:34:36.476132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.872 [2024-10-07 11:34:36.476143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:54.872 [2024-10-07 11:34:36.476153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:54.872 [2024-10-07 11:34:36.476163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:54.872 [2024-10-07 11:34:36.476173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:54.872 [2024-10-07 11:34:36.476183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:54.872 [2024-10-07 11:34:36.476193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:54.872 [2024-10-07 11:34:36.476202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:54.872 [2024-10-07 11:34:36.476224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:54.872 [2024-10-07 11:34:36.476234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:54.872 [2024-10-07 11:34:36.476256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:54.872 [2024-10-07 11:34:36.476267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:54.872 [2024-10-07 11:34:36.476277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.872 [2024-10-07 11:34:36.476288] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:54.872 [2024-10-07 11:34:36.476297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:54.872 [2024-10-07 11:34:36.476307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:54.873 [2024-10-07 11:34:36.476327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.873 [2024-10-07 11:34:36.476346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:54.873 [2024-10-07 11:34:36.476356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.873 [2024-10-07 11:34:36.476376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:54.873 [2024-10-07 11:34:36.476394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.873 [2024-10-07 11:34:36.476413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:54.873 [2024-10-07 11:34:36.476423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.873 [2024-10-07 11:34:36.476442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:54.873 [2024-10-07 11:34:36.476460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:54.873 [2024-10-07 11:34:36.476481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:54.873 [2024-10-07 11:34:36.476491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:54.873 [2024-10-07 11:34:36.476501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:54.873 [2024-10-07 11:34:36.476511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:54.873 [2024-10-07 11:34:36.476529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:54.873 [2024-10-07 11:34:36.476539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:54.873 [2024-10-07 11:34:36.476559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:54.873 [2024-10-07 11:34:36.476569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476578] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:54.873 [2024-10-07 11:34:36.476589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:54.873 [2024-10-07 11:34:36.476600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:54.873 [2024-10-07 11:34:36.476611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.873 [2024-10-07 11:34:36.476622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:54.873 
[2024-10-07 11:34:36.476631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:54.873 [2024-10-07 11:34:36.476641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:54.873 [2024-10-07 11:34:36.476656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:54.873 [2024-10-07 11:34:36.476666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:54.873 [2024-10-07 11:34:36.476676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:54.873 [2024-10-07 11:34:36.476688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:54.873 [2024-10-07 11:34:36.476701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:54.873 [2024-10-07 11:34:36.476717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:54.873 [2024-10-07 11:34:36.476728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:54.873 [2024-10-07 11:34:36.476755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:54.873 [2024-10-07 11:34:36.476767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:54.873 [2024-10-07 11:34:36.476778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:54.873 [2024-10-07 11:34:36.476789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:54.873 [2024-10-07 11:34:36.476800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:54.873 [2024-10-07 11:34:36.476811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:54.873 [2024-10-07 11:34:36.476821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:54.873 [2024-10-07 11:34:36.476832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:54.873 [2024-10-07 11:34:36.476844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:54.873 [2024-10-07 11:34:36.476854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:54.873 [2024-10-07 11:34:36.476865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:54.873 [2024-10-07 11:34:36.476876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:54.873 [2024-10-07 11:34:36.476887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:54.873 [2024-10-07 11:34:36.476898] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:54.873 [2024-10-07 11:34:36.476910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:54.873 [2024-10-07 11:34:36.476921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:54.873 [2024-10-07 11:34:36.476931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:54.873 [2024-10-07 11:34:36.476942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:54.873 [2024-10-07 11:34:36.476953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.873 [2024-10-07 11:34:36.476967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:54.873 [2024-10-07 11:34:36.476979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:23:54.873 [2024-10-07 11:34:36.476989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.873 [2024-10-07 11:34:36.524452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.873 [2024-10-07 11:34:36.524515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:54.873 [2024-10-07 11:34:36.524532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.475 ms 00:23:54.873 [2024-10-07 11:34:36.524543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.873 [2024-10-07 11:34:36.524720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.873 [2024-10-07 11:34:36.524733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:54.873 [2024-10-07 11:34:36.524757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:54.873 [2024-10-07 11:34:36.524767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.873 [2024-10-07 11:34:36.573041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.873 [2024-10-07 11:34:36.573105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:54.873 [2024-10-07 11:34:36.573122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.323 ms 00:23:54.873 [2024-10-07 11:34:36.573132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.873 [2024-10-07 11:34:36.573311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.873 [2024-10-07 11:34:36.573326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:54.873 [2024-10-07 11:34:36.573352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:54.873 [2024-10-07 11:34:36.573363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.873 [2024-10-07 11:34:36.573856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.873 [2024-10-07 11:34:36.573881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:54.873 [2024-10-07 11:34:36.573894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:23:54.873 [2024-10-07 11:34:36.573904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.873 [2024-10-07 
00:23:54.873 [2024-10-07 11:34:36.574035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.873 [2024-10-07 11:34:36.574056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:54.873 [2024-10-07 11:34:36.574067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms
00:23:54.873 [2024-10-07 11:34:36.574077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.595077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.595144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:55.142 [2024-10-07 11:34:36.595160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.006 ms
00:23:55.142 [2024-10-07 11:34:36.595172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.615665] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:23:55.142 [2024-10-07 11:34:36.615728] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:55.142 [2024-10-07 11:34:36.615753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.615765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:23:55.142 [2024-10-07 11:34:36.615778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.423 ms
00:23:55.142 [2024-10-07 11:34:36.615789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.646355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.646401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:23:55.142 [2024-10-07 11:34:36.646424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.515 ms
00:23:55.142 [2024-10-07 11:34:36.646435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.664997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.665039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:23:55.142 [2024-10-07 11:34:36.665052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.501 ms
00:23:55.142 [2024-10-07 11:34:36.665062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.682979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.683015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:23:55.142 [2024-10-07 11:34:36.683028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.864 ms
00:23:55.142 [2024-10-07 11:34:36.683037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.683958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.683991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:23:55.142 [2024-10-07 11:34:36.684004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms
00:23:55.142 [2024-10-07 11:34:36.684014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
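Each management step above is traced as a four-line group (Action / name / duration / status). A one-liner like the following collapses those groups into a compact step table; build.log is again a hypothetical saved copy of this console output:

  # Pair each "name:" line with the "duration:" line that follows it.
  awk '/trace_step.*name:/     { n = substr($0, index($0, "name:") + 6) }
       /trace_step.*duration:/ { printf "%-32s %10s ms\n", n, $(NF-1) }' build.log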
00:23:55.142 [2024-10-07 11:34:36.772107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.772180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:23:55.142 [2024-10-07 11:34:36.772197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.204 ms
00:23:55.142 [2024-10-07 11:34:36.772208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.783693] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:55.142 [2024-10-07 11:34:36.800323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.800378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:23:55.142 [2024-10-07 11:34:36.800394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.003 ms
00:23:55.142 [2024-10-07 11:34:36.800404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.800550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.800564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:23:55.142 [2024-10-07 11:34:36.800576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:23:55.142 [2024-10-07 11:34:36.800587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.800651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.800666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:23:55.142 [2024-10-07 11:34:36.800677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:23:55.142 [2024-10-07 11:34:36.800688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.800714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.800726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:23:55.142 [2024-10-07 11:34:36.800736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:23:55.142 [2024-10-07 11:34:36.800763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.800802] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:55.142 [2024-10-07 11:34:36.800814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.800824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:23:55.142 [2024-10-07 11:34:36.800838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:23:55.142 [2024-10-07 11:34:36.800847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.838921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.838965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:23:55.142 [2024-10-07 11:34:36.838980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.109 ms
00:23:55.142 [2024-10-07 11:34:36.838991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.839116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.142 [2024-10-07 11:34:36.839134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:55.142 [2024-10-07 11:34:36.839145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:23:55.142 [2024-10-07 11:34:36.839155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:55.142 [2024-10-07 11:34:36.840087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:55.142 [2024-10-07 11:34:36.844558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.314 ms, result 0
00:23:55.142 [2024-10-07 11:34:36.845246] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:55.401 [2024-10-07 11:34:36.863615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:56.335  [2024-10-07T11:34:38.978Z] Copying: 29/256 [MB] (29 MBps) [2024-10-07T11:34:39.912Z] Copying: 57/256 [MB] (27 MBps) [2024-10-07T11:34:41.286Z] Copying: 84/256 [MB] (26 MBps) [2024-10-07T11:34:42.220Z] Copying: 111/256 [MB] (27 MBps) [2024-10-07T11:34:43.154Z] Copying: 139/256 [MB] (27 MBps) [2024-10-07T11:34:44.089Z] Copying: 166/256 [MB] (27 MBps) [2024-10-07T11:34:45.024Z] Copying: 192/256 [MB] (26 MBps) [2024-10-07T11:34:45.958Z] Copying: 218/256 [MB] (26 MBps) [2024-10-07T11:34:46.226Z] Copying: 246/256 [MB] (27 MBps) [2024-10-07T11:34:46.226Z] Copying: 256/256 [MB] (average 27 MBps)[2024-10-07 11:34:46.191405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:04.515 [2024-10-07 11:34:46.206386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.515 [2024-10-07 11:34:46.206431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:04.515 [2024-10-07 11:34:46.206448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:24:04.515 [2024-10-07 11:34:46.206460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.515 [2024-10-07 11:34:46.206486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:24:04.515 [2024-10-07 11:34:46.210650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.515 [2024-10-07 11:34:46.210679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:04.515 [2024-10-07 11:34:46.210691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.154 ms
00:24:04.515 [2024-10-07 11:34:46.210707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.515 [2024-10-07 11:34:46.210976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.515 [2024-10-07 11:34:46.211002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:04.515 [2024-10-07 11:34:46.211021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms
00:24:04.515 [2024-10-07 11:34:46.211031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.515 [2024-10-07 11:34:46.213927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.515 [2024-10-07 11:34:46.213952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:04.515 [2024-10-07 11:34:46.213965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.884 ms
00:24:04.515 [2024-10-07 11:34:46.213975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
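The progress line above reports an average of 27 MBps, which is consistent with the surrounding timestamps: the IO channel came up at 11:34:36.863 and the final 256/256 update landed at 11:34:46.226, roughly 9.4 s for 256 MB. A quick back-of-envelope check in shell (figures taken from the log above):

  start=36.863   # 11:34:36.863, FTL IO channel created
  end=46.226     # 11:34:46.226, "Copying: 256/256 [MB]"
  echo "scale=1; 256 / ($end - $start)" | bc   # -> 27.3, matching the reported average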
00:24:04.817 [2024-10-07 11:34:46.219815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.817 [2024-10-07 11:34:46.219847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:24:04.817 [2024-10-07 11:34:46.219868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.830 ms
00:24:04.817 [2024-10-07 11:34:46.219879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.817 [2024-10-07 11:34:46.256754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.817 [2024-10-07 11:34:46.256826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:04.817 [2024-10-07 11:34:46.256843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.861 ms
00:24:04.817 [2024-10-07 11:34:46.256854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.817 [2024-10-07 11:34:46.279421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.817 [2024-10-07 11:34:46.279500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:24:04.817 [2024-10-07 11:34:46.279517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.492 ms
00:24:04.817 [2024-10-07 11:34:46.279529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.817 [2024-10-07 11:34:46.279726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.817 [2024-10-07 11:34:46.279754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:24:04.817 [2024-10-07 11:34:46.279767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms
00:24:04.817 [2024-10-07 11:34:46.279777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.817 [2024-10-07 11:34:46.319135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.817 [2024-10-07 11:34:46.319204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:24:04.817 [2024-10-07 11:34:46.319221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.381 ms
00:24:04.817 [2024-10-07 11:34:46.319231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.817 [2024-10-07 11:34:46.359205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.817 [2024-10-07 11:34:46.359276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:24:04.817 [2024-10-07 11:34:46.359292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.921 ms
00:24:04.817 [2024-10-07 11:34:46.359302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.817 [2024-10-07 11:34:46.398710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.817 [2024-10-07 11:34:46.398797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:24:04.817 [2024-10-07 11:34:46.398813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.358 ms
00:24:04.817 [2024-10-07 11:34:46.398824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.817 [2024-10-07 11:34:46.436402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.817 [2024-10-07 11:34:46.436472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:04.817 [2024-10-07 11:34:46.436488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.480 ms
00:24:04.817 [2024-10-07 11:34:46.436501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.817 [2024-10-07 11:34:46.436589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:04.817 [2024-10-07 11:34:46.436609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.436992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.437003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.437014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.437024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.437035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.437045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.437056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.437066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:24:04.817 [2024-10-07 11:34:46.437077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:04.818 [2024-10-07 11:34:46.437722] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:04.818 [2024-10-07 11:34:46.437733] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c659e091-29a7-4fa5-b97b-7822cea8a8a4
00:24:04.818 [2024-10-07 11:34:46.437753] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:24:04.818 [2024-10-07 11:34:46.437763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:24:04.818 [2024-10-07 11:34:46.437774] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:04.818 [2024-10-07 11:34:46.437798] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:04.818 [2024-10-07 11:34:46.437808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:04.818 [2024-10-07 11:34:46.437818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:24:04.818 [2024-10-07 11:34:46.437829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:24:04.818 [2024-10-07 11:34:46.437838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:24:04.818 [2024-10-07 11:34:46.437847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:04.818 [2024-10-07 11:34:46.437857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.818 [2024-10-07 11:34:46.437869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:04.818 [2024-10-07 11:34:46.437880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms
00:24:04.818 [2024-10-07 11:34:46.437889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
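In the statistics dump above, WAF (write amplification factor) is total writes divided by user writes; this run performed 960 metadata and housekeeping writes but no user writes, so the ratio degenerates to "inf". The same computation in shell, with the two figures taken from the dump:

  total=960   # "total writes" from the dump above
  user=0      # "user writes" from the dump above
  awk -v t="$total" -v u="$user" 'BEGIN { printf "%s\n", (u > 0 ? t / u : "inf") }'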
00:24:04.818 [2024-10-07 11:34:46.457474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.818 [2024-10-07 11:34:46.457552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:04.818 [2024-10-07 11:34:46.457566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.587 ms
00:24:04.818 [2024-10-07 11:34:46.457577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.818 [2024-10-07 11:34:46.458157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:04.818 [2024-10-07 11:34:46.458176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:04.818 [2024-10-07 11:34:46.458188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms
00:24:04.818 [2024-10-07 11:34:46.458199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.818 [2024-10-07 11:34:46.506413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:04.818 [2024-10-07 11:34:46.506472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:04.818 [2024-10-07 11:34:46.506487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:04.818 [2024-10-07 11:34:46.506498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.818 [2024-10-07 11:34:46.506629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:04.818 [2024-10-07 11:34:46.506642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:04.818 [2024-10-07 11:34:46.506653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:04.818 [2024-10-07 11:34:46.506663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.818 [2024-10-07 11:34:46.506720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:04.818 [2024-10-07 11:34:46.506749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:04.818 [2024-10-07 11:34:46.506766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:04.818 [2024-10-07 11:34:46.506776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:04.818 [2024-10-07 11:34:46.506795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:04.818 [2024-10-07 11:34:46.506806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:04.818 [2024-10-07 11:34:46.506816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:04.818 [2024-10-07 11:34:46.506826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.630758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.078 [2024-10-07 11:34:46.630829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:05.078 [2024-10-07 11:34:46.630845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:05.078 [2024-10-07 11:34:46.630856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.730458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.078 [2024-10-07 11:34:46.730527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:05.078 [2024-10-07 11:34:46.730543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:05.078 [2024-10-07 11:34:46.730555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.730633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.078 [2024-10-07 11:34:46.730645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:05.078 [2024-10-07 11:34:46.730656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:05.078 [2024-10-07 11:34:46.730678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.730708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.078 [2024-10-07 11:34:46.730719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:05.078 [2024-10-07 11:34:46.730729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:05.078 [2024-10-07 11:34:46.730753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.730893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.078 [2024-10-07 11:34:46.730908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:05.078 [2024-10-07 11:34:46.730919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:05.078 [2024-10-07 11:34:46.730936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.730976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.078 [2024-10-07 11:34:46.730989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:05.078 [2024-10-07 11:34:46.730999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:05.078 [2024-10-07 11:34:46.731009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.731049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.078 [2024-10-07 11:34:46.731060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:05.078 [2024-10-07 11:34:46.731070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:05.078 [2024-10-07 11:34:46.731081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.731135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.078 [2024-10-07 11:34:46.731147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:05.078 [2024-10-07 11:34:46.731157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:05.078 [2024-10-07 11:34:46.731168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:05.078 [2024-10-07 11:34:46.731341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.785 ms, result 0
00:24:06.456
00:24:06.456
00:24:06.456 11:34:47 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:24:06.456 11:34:47 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:24:06.713 11:34:48 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:06.971 [2024-10-07 11:34:48.482130] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
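The three traced trim.sh commands appear to form the test's verification step: cmp checks that the first 4 MiB of the exported data reads back as zeroes after the trim, md5sum fingerprints the file for a later comparison, and spdk_dd then writes a fresh random pattern into ftl0. In outline, with the same paths as in the log (this is a reading of the traced commands, not the script itself):

  spdk=/home/vagrant/spdk_repo/spdk
  cmp --bytes=4194304 "$spdk/test/ftl/data" /dev/zero \
    && echo "trimmed range reads back as zeroes"   # cmp exits 0 when identical
  md5sum "$spdk/test/ftl/data"                     # fingerprint for later comparison
  # rewrite the bdev with a known random pattern, exactly as traced above:
  "$spdk/build/bin/spdk_dd" --if="$spdk/test/ftl/random_pattern" \
    --ob=ftl0 --count=1024 --json="$spdk/test/ftl/config/ftl.json"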
00:24:06.971 [2024-10-07 11:34:48.482266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76680 ]
00:24:06.971 [2024-10-07 11:34:48.642847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:07.230 [2024-10-07 11:34:48.862416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:24:07.803 [2024-10-07 11:34:49.225112] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:07.803 [2024-10-07 11:34:49.225185] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:07.803 [2024-10-07 11:34:49.387488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.387559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:07.803 [2024-10-07 11:34:49.387580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:24:07.803 [2024-10-07 11:34:49.387599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.391009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.391055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:07.803 [2024-10-07 11:34:49.391068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.390 ms
00:24:07.803 [2024-10-07 11:34:49.391079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.391213] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:07.803 [2024-10-07 11:34:49.392233] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:07.803 [2024-10-07 11:34:49.392268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.392280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:07.803 [2024-10-07 11:34:49.392296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms
00:24:07.803 [2024-10-07 11:34:49.392306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.393825] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:07.803 [2024-10-07 11:34:49.414535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.414585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:24:07.803 [2024-10-07 11:34:49.414602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.743 ms
00:24:07.803 [2024-10-07 11:34:49.414613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.414799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.414817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:24:07.803 [2024-10-07 11:34:49.414832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:24:07.803 [2024-10-07 11:34:49.414844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.422137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.422175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:07.803 [2024-10-07 11:34:49.422188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.255 ms
00:24:07.803 [2024-10-07 11:34:49.422199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.422319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.422340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:07.803 [2024-10-07 11:34:49.422352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms
00:24:07.803 [2024-10-07 11:34:49.422362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.422396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.422408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:24:07.803 [2024-10-07 11:34:49.422419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:24:07.803 [2024-10-07 11:34:49.422429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.422455] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:24:07.803 [2024-10-07 11:34:49.427385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.427420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:07.803 [2024-10-07 11:34:49.427433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.945 ms
00:24:07.803 [2024-10-07 11:34:49.427443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.427526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.427543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:24:07.803 [2024-10-07 11:34:49.427554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:24:07.803 [2024-10-07 11:34:49.427565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.427589] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:07.803 [2024-10-07 11:34:49.427612] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:07.803 [2024-10-07 11:34:49.427648] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:07.803 [2024-10-07 11:34:49.427666] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:07.803 [2024-10-07 11:34:49.427771] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:07.803 [2024-10-07 11:34:49.427785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:07.803 [2024-10-07 11:34:49.427799] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:07.803 [2024-10-07 11:34:49.427812] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:24:07.803 [2024-10-07 11:34:49.427824] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:24:07.803 [2024-10-07 11:34:49.427835] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:24:07.803 [2024-10-07 11:34:49.427846] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:24:07.803 [2024-10-07 11:34:49.427856] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:24:07.803 [2024-10-07 11:34:49.427866] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:24:07.803 [2024-10-07 11:34:49.427877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.427887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:24:07.803 [2024-10-07 11:34:49.427901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms
00:24:07.803 [2024-10-07 11:34:49.427912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.427990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.803 [2024-10-07 11:34:49.428001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:24:07.803 [2024-10-07 11:34:49.428011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:24:07.803 [2024-10-07 11:34:49.428021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.803 [2024-10-07 11:34:49.428110] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:07.803 [2024-10-07 11:34:49.428123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:07.803 [2024-10-07 11:34:49.428133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:24:07.803 [2024-10-07 11:34:49.428147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:07.803 [2024-10-07 11:34:49.428158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:07.803 [2024-10-07 11:34:49.428167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:24:07.803 [2024-10-07 11:34:49.428176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:24:07.803 [2024-10-07 11:34:49.428185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:07.803 [2024-10-07 11:34:49.428195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:24:07.803 [2024-10-07 11:34:49.428204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:24:07.803 [2024-10-07 11:34:49.428214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:07.803 [2024-10-07 11:34:49.428235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:24:07.803 [2024-10-07 11:34:49.428244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:24:07.803 [2024-10-07 11:34:49.428254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:07.803 [2024-10-07 11:34:49.428263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:24:07.803 [2024-10-07 11:34:49.428272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:07.803 [2024-10-07 11:34:49.428282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:07.803 [2024-10-07 11:34:49.428291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:24:07.803 [2024-10-07 11:34:49.428300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:07.804 [2024-10-07 11:34:49.428319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:07.804 [2024-10-07 11:34:49.428337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:07.804 [2024-10-07 11:34:49.428346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:07.804 [2024-10-07 11:34:49.428365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:07.804 [2024-10-07 11:34:49.428374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:07.804 [2024-10-07 11:34:49.428393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:07.804 [2024-10-07 11:34:49.428402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:07.804 [2024-10-07 11:34:49.428419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:07.804 [2024-10-07 11:34:49.428429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:24:07.804 [2024-10-07 11:34:49.428447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:07.804 [2024-10-07 11:34:49.428456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:24:07.804 [2024-10-07 11:34:49.428465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:24:07.804 [2024-10-07 11:34:49.428474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:07.804 [2024-10-07 11:34:49.428482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:24:07.804 [2024-10-07 11:34:49.428491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:07.804 [2024-10-07 11:34:49.428509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:24:07.804 [2024-10-07 11:34:49.428519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428528] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:07.804 [2024-10-07 11:34:49.428538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:07.804 [2024-10-07 11:34:49.428548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:24:07.804 [2024-10-07 11:34:49.428558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:07.804 [2024-10-07 11:34:49.428570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:07.804 [2024-10-07 11:34:49.428579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:24:07.804 [2024-10-07 11:34:49.428589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:24:07.804 [2024-10-07 11:34:49.428598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:07.804 [2024-10-07 11:34:49.428607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:24:07.804 [2024-10-07 11:34:49.428616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:24:07.804 [2024-10-07 11:34:49.428627] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:07.804 [2024-10-07 11:34:49.428639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:07.804 [2024-10-07 11:34:49.428654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:24:07.804 [2024-10-07 11:34:49.428664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:24:07.804 [2024-10-07 11:34:49.428674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:24:07.804 [2024-10-07 11:34:49.428684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:24:07.804 [2024-10-07 11:34:49.428696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:24:07.804 [2024-10-07 11:34:49.428707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:24:07.804 [2024-10-07 11:34:49.428718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:24:07.804 [2024-10-07 11:34:49.428728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:24:07.804 [2024-10-07 11:34:49.428748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:24:07.804 [2024-10-07 11:34:49.428759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:24:07.804 [2024-10-07 11:34:49.428770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:24:07.804 [2024-10-07 11:34:49.428781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:24:07.804 [2024-10-07 11:34:49.428792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:24:07.804 [2024-10-07 11:34:49.428802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:24:07.804 [2024-10-07 11:34:49.428813] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:07.804 [2024-10-07 11:34:49.428824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:07.804 [2024-10-07 11:34:49.428836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:07.804 [2024-10-07 11:34:49.428846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:07.804 [2024-10-07 11:34:49.428857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:07.804 [2024-10-07 11:34:49.428868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:07.804 [2024-10-07 11:34:49.428879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.804 [2024-10-07 11:34:49.428893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:24:07.804 [2024-10-07 11:34:49.428903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms
00:24:07.804 [2024-10-07 11:34:49.428913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.804 [2024-10-07 11:34:49.478225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.804 [2024-10-07 11:34:49.478294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:07.804 [2024-10-07 11:34:49.478311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.331 ms
00:24:07.804 [2024-10-07 11:34:49.478322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:07.804 [2024-10-07 11:34:49.478520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.804 [2024-10-07 11:34:49.478534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:24:07.804 [2024-10-07 11:34:49.478546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:24:07.804 [2024-10-07 11:34:49.478556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:08.081 [2024-10-07 11:34:49.525407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:08.081 [2024-10-07 11:34:49.525474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:08.081 [2024-10-07 11:34:49.525490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.899 ms
00:24:08.081 [2024-10-07 11:34:49.525501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:08.081 [2024-10-07 11:34:49.525633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:08.082 [2024-10-07 11:34:49.525645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:08.082 [2024-10-07 11:34:49.525656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:08.082 [2024-10-07 11:34:49.525666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:08.082 [2024-10-07 11:34:49.526134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:08.082 [2024-10-07 11:34:49.526156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:08.082 [2024-10-07 11:34:49.526167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms
00:24:08.082 [2024-10-07 11:34:49.526178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
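The layout figures above are self-consistent: 23592960 L2P entries at 4 bytes per address is 94,371,840 bytes, exactly the 90.00 MiB that the l2p region occupies in the NV cache layout. As shell arithmetic:

  echo $(( 23592960 * 4 ))            # 94371840 bytes
  echo $(( 23592960 * 4 / 1048576 ))  # 90, matching "Region l2p ... blocks: 90.00 MiB"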
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:24:08.082 [2024-10-07 11:34:49.526371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.545043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.545101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.082 [2024-10-07 11:34:49.545117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.674 ms 00:24:08.082 [2024-10-07 11:34:49.545128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.564358] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:08.082 [2024-10-07 11:34:49.564423] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:08.082 [2024-10-07 11:34:49.564443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.564454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:08.082 [2024-10-07 11:34:49.564467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.169 ms 00:24:08.082 [2024-10-07 11:34:49.564478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.594372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.594439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:08.082 [2024-10-07 11:34:49.594462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.824 ms 00:24:08.082 [2024-10-07 11:34:49.594473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.612507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.612556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:08.082 [2024-10-07 11:34:49.612570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.942 ms 00:24:08.082 [2024-10-07 11:34:49.612581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.631764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.631838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:08.082 [2024-10-07 11:34:49.631855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.122 ms 00:24:08.082 [2024-10-07 11:34:49.631866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.632716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.632762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:08.082 [2024-10-07 11:34:49.632776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:24:08.082 [2024-10-07 11:34:49.632786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.720987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.721065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:08.082 [2024-10-07 11:34:49.721084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.307 ms 00:24:08.082 [2024-10-07 11:34:49.721095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.735453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:08.082 [2024-10-07 11:34:49.752232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.752298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:08.082 [2024-10-07 11:34:49.752314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.001 ms 00:24:08.082 [2024-10-07 11:34:49.752325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.752463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.752478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:08.082 [2024-10-07 11:34:49.752490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:08.082 [2024-10-07 11:34:49.752501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.752566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.752582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.082 [2024-10-07 11:34:49.752593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:08.082 [2024-10-07 11:34:49.752603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.752630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.752642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.082 [2024-10-07 11:34:49.752652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:08.082 [2024-10-07 11:34:49.752663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.752700] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:08.082 [2024-10-07 11:34:49.752714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.752724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:08.082 [2024-10-07 11:34:49.752755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:08.082 [2024-10-07 11:34:49.752767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.790213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.790302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.082 [2024-10-07 11:34:49.790320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.476 ms 00:24:08.082 [2024-10-07 11:34:49.790331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.082 [2024-10-07 11:34:49.790539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.082 [2024-10-07 11:34:49.790558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.082 [2024-10-07 11:34:49.790570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:08.082 [2024-10-07 11:34:49.790580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:08.082 [2024-10-07 11:34:49.791660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.341 [2024-10-07 11:34:49.796679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.508 ms, result 0 00:24:08.341 [2024-10-07 11:34:49.797626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:08.341 [2024-10-07 11:34:49.816145] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.341 [2024-10-07T11:34:50.052Z] Copying: 4096/4096 [kB] (average 25 MBps) [2024-10-07 11:34:49.980144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:08.341 [2024-10-07 11:34:49.994546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.341 [2024-10-07 11:34:49.994587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:08.341 [2024-10-07 11:34:49.994603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:08.341 [2024-10-07 11:34:49.994614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.341 [2024-10-07 11:34:49.994638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:08.342 [2024-10-07 11:34:49.998812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.342 [2024-10-07 11:34:49.998842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:08.342 [2024-10-07 11:34:49.998854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.164 ms 00:24:08.342 [2024-10-07 11:34:49.998864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.342 [2024-10-07 11:34:50.000870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.342 [2024-10-07 11:34:50.000915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:08.342 [2024-10-07 11:34:50.000929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:24:08.342 [2024-10-07 11:34:50.000939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.342 [2024-10-07 11:34:50.004468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.342 [2024-10-07 11:34:50.004499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:08.342 [2024-10-07 11:34:50.004512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.499 ms 00:24:08.342 [2024-10-07 11:34:50.004522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.342 [2024-10-07 11:34:50.010212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.342 [2024-10-07 11:34:50.010263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:08.342 [2024-10-07 11:34:50.010288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.653 ms 00:24:08.342 [2024-10-07 11:34:50.010298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.342 [2024-10-07 11:34:50.046775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.342 [2024-10-07 11:34:50.046817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:08.342 [2024-10-07 11:34:50.046832] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 36.487 ms 00:24:08.342 [2024-10-07 11:34:50.046843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.602 [2024-10-07 11:34:50.067617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.602 [2024-10-07 11:34:50.067658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:08.602 [2024-10-07 11:34:50.067672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.749 ms 00:24:08.602 [2024-10-07 11:34:50.067683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.602 [2024-10-07 11:34:50.067829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.602 [2024-10-07 11:34:50.067843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:08.602 [2024-10-07 11:34:50.067854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:24:08.602 [2024-10-07 11:34:50.067864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.602 [2024-10-07 11:34:50.104102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.602 [2024-10-07 11:34:50.104141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:08.602 [2024-10-07 11:34:50.104155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.271 ms 00:24:08.602 [2024-10-07 11:34:50.104165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.602 [2024-10-07 11:34:50.140519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.602 [2024-10-07 11:34:50.140568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:08.602 [2024-10-07 11:34:50.140584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.356 ms 00:24:08.602 [2024-10-07 11:34:50.140594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.602 [2024-10-07 11:34:50.175906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.602 [2024-10-07 11:34:50.175950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:08.602 [2024-10-07 11:34:50.175964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.286 ms 00:24:08.602 [2024-10-07 11:34:50.175975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.602 [2024-10-07 11:34:50.211374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.602 [2024-10-07 11:34:50.211413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:08.602 [2024-10-07 11:34:50.211427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.372 ms 00:24:08.602 [2024-10-07 11:34:50.211437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.602 [2024-10-07 11:34:50.211492] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:08.602 [2024-10-07 11:34:50.211511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:08.602 [2024-10-07 11:34:50.211558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.211991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:08.602 [2024-10-07 11:34:50.212178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212347] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:08.603 [2024-10-07 11:34:50.212602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:08.603 [2024-10-07 11:34:50.212612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c659e091-29a7-4fa5-b97b-7822cea8a8a4 00:24:08.603 [2024-10-07 11:34:50.212623] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:08.603 [2024-10-07 11:34:50.212633] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:24:08.603 [2024-10-07 11:34:50.212645] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:08.603 [2024-10-07 11:34:50.212660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:08.603 [2024-10-07 11:34:50.212669] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:08.603 [2024-10-07 11:34:50.212680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:08.603 [2024-10-07 11:34:50.212690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:08.603 [2024-10-07 11:34:50.212699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:08.603 [2024-10-07 11:34:50.212708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:08.603 [2024-10-07 11:34:50.212718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.603 [2024-10-07 11:34:50.212728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:08.603 [2024-10-07 11:34:50.212748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:24:08.603 [2024-10-07 11:34:50.212760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.603 [2024-10-07 11:34:50.232214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.603 [2024-10-07 11:34:50.232256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:08.603 [2024-10-07 11:34:50.232268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.465 ms 00:24:08.603 [2024-10-07 11:34:50.232278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.603 [2024-10-07 11:34:50.232822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.603 [2024-10-07 11:34:50.232850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:08.603 [2024-10-07 11:34:50.232862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:24:08.603 [2024-10-07 11:34:50.232872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.603 [2024-10-07 11:34:50.281450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.603 [2024-10-07 11:34:50.281504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.603 [2024-10-07 11:34:50.281520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.603 [2024-10-07 11:34:50.281530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.603 [2024-10-07 11:34:50.281659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.603 [2024-10-07 11:34:50.281672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.603 [2024-10-07 11:34:50.281683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.603 [2024-10-07 11:34:50.281694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.603 [2024-10-07 11:34:50.281765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.603 [2024-10-07 11:34:50.281784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.603 [2024-10-07 11:34:50.281796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.603 [2024-10-07 11:34:50.281805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.603 [2024-10-07 11:34:50.281826] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.603 [2024-10-07 11:34:50.281836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.603 [2024-10-07 11:34:50.281847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.603 [2024-10-07 11:34:50.281857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.408358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.862 [2024-10-07 11:34:50.408429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.862 [2024-10-07 11:34:50.408444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.862 [2024-10-07 11:34:50.408455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.512391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.862 [2024-10-07 11:34:50.512447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.862 [2024-10-07 11:34:50.512462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.862 [2024-10-07 11:34:50.512473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.512574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.862 [2024-10-07 11:34:50.512587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.862 [2024-10-07 11:34:50.512604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.862 [2024-10-07 11:34:50.512614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.512645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.862 [2024-10-07 11:34:50.512656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.862 [2024-10-07 11:34:50.512666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.862 [2024-10-07 11:34:50.512677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.512808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.862 [2024-10-07 11:34:50.512822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.862 [2024-10-07 11:34:50.512834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.862 [2024-10-07 11:34:50.512849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.512888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.862 [2024-10-07 11:34:50.512900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:08.862 [2024-10-07 11:34:50.512911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.862 [2024-10-07 11:34:50.512921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.512959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.862 [2024-10-07 11:34:50.512970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.862 [2024-10-07 11:34:50.512980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.862 [2024-10-07 11:34:50.512994] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.513038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.862 [2024-10-07 11:34:50.513050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.862 [2024-10-07 11:34:50.513060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.862 [2024-10-07 11:34:50.513070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.862 [2024-10-07 11:34:50.513213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.499 ms, result 0 00:24:10.243 00:24:10.243 00:24:10.243 11:34:51 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76716 00:24:10.243 11:34:51 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:10.243 11:34:51 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76716 00:24:10.243 11:34:51 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76716 ']' 00:24:10.243 11:34:51 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.243 11:34:51 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.243 11:34:51 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.243 11:34:51 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.243 11:34:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:10.243 [2024-10-07 11:34:51.829722] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:24:10.243 [2024-10-07 11:34:51.829879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76716 ] 00:24:10.502 [2024-10-07 11:34:52.001147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.761 [2024-10-07 11:34:52.224444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.699 11:34:53 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.699 11:34:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:24:11.699 11:34:53 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:11.699 [2024-10-07 11:34:53.352010] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.699 [2024-10-07 11:34:53.352072] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.959 [2024-10-07 11:34:53.536776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.959 [2024-10-07 11:34:53.536950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:11.959 [2024-10-07 11:34:53.537010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:11.959 [2024-10-07 11:34:53.537082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.959 [2024-10-07 11:34:53.540844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.959 [2024-10-07 11:34:53.540967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.959 [2024-10-07 11:34:53.541022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.684 ms 00:24:11.959 [2024-10-07 11:34:53.541062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.959 [2024-10-07 11:34:53.541321] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:11.959 [2024-10-07 11:34:53.542435] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:11.959 [2024-10-07 11:34:53.542557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.959 [2024-10-07 11:34:53.542614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.959 [2024-10-07 11:34:53.542687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:24:11.959 [2024-10-07 11:34:53.542768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.959 [2024-10-07 11:34:53.544284] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:11.959 [2024-10-07 11:34:53.563481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.959 [2024-10-07 11:34:53.563583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:11.959 [2024-10-07 11:34:53.563632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.233 ms 00:24:11.959 [2024-10-07 11:34:53.563676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.959 [2024-10-07 11:34:53.563812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.959 [2024-10-07 11:34:53.563880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:11.959 [2024-10-07 11:34:53.563922] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:11.959 [2024-10-07 11:34:53.563988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.959 [2024-10-07 11:34:53.570747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.959 [2024-10-07 11:34:53.570834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.959 [2024-10-07 11:34:53.570879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.675 ms 00:24:11.959 [2024-10-07 11:34:53.570935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.959 [2024-10-07 11:34:53.571096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.959 [2024-10-07 11:34:53.571148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.959 [2024-10-07 11:34:53.571199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:24:11.959 [2024-10-07 11:34:53.571260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.959 [2024-10-07 11:34:53.571323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.959 [2024-10-07 11:34:53.571368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:11.959 [2024-10-07 11:34:53.571382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:11.959 [2024-10-07 11:34:53.571395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.960 [2024-10-07 11:34:53.571422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:11.960 [2024-10-07 11:34:53.576343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.960 [2024-10-07 11:34:53.576378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.960 [2024-10-07 11:34:53.576394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.932 ms 00:24:11.960 [2024-10-07 11:34:53.576407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.960 [2024-10-07 11:34:53.576482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.960 [2024-10-07 11:34:53.576494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:11.960 [2024-10-07 11:34:53.576509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:11.960 [2024-10-07 11:34:53.576520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.960 [2024-10-07 11:34:53.576545] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:11.960 [2024-10-07 11:34:53.576569] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:11.960 [2024-10-07 11:34:53.576615] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:11.960 [2024-10-07 11:34:53.576639] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:11.960 [2024-10-07 11:34:53.576732] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:11.960 [2024-10-07 11:34:53.576763] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:11.960 [2024-10-07 11:34:53.576781] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:11.960 [2024-10-07 11:34:53.576794] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:11.960 [2024-10-07 11:34:53.576809] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:11.960 [2024-10-07 11:34:53.576820] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:11.960 [2024-10-07 11:34:53.576833] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:11.960 [2024-10-07 11:34:53.576843] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:11.960 [2024-10-07 11:34:53.576859] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:11.960 [2024-10-07 11:34:53.576874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.960 [2024-10-07 11:34:53.576887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:11.960 [2024-10-07 11:34:53.576897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:11.960 [2024-10-07 11:34:53.576910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.960 [2024-10-07 11:34:53.576985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.960 [2024-10-07 11:34:53.576999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:11.960 [2024-10-07 11:34:53.577009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:11.960 [2024-10-07 11:34:53.577022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.960 [2024-10-07 11:34:53.577111] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:11.960 [2024-10-07 11:34:53.577129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:11.960 [2024-10-07 11:34:53.577141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:11.960 [2024-10-07 11:34:53.577176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:11.960 [2024-10-07 11:34:53.577211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.960 [2024-10-07 11:34:53.577232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:11.960 [2024-10-07 11:34:53.577245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:11.960 [2024-10-07 11:34:53.577254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.960 [2024-10-07 11:34:53.577266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:11.960 [2024-10-07 11:34:53.577275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:11.960 [2024-10-07 11:34:53.577288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.960 
[2024-10-07 11:34:53.577297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:11.960 [2024-10-07 11:34:53.577309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:11.960 [2024-10-07 11:34:53.577349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:11.960 [2024-10-07 11:34:53.577384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:11.960 [2024-10-07 11:34:53.577414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:11.960 [2024-10-07 11:34:53.577447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:11.960 [2024-10-07 11:34:53.577477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.960 [2024-10-07 11:34:53.577500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:11.960 [2024-10-07 11:34:53.577511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:11.960 [2024-10-07 11:34:53.577521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.960 [2024-10-07 11:34:53.577532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:11.960 [2024-10-07 11:34:53.577541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:11.960 [2024-10-07 11:34:53.577555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:11.960 [2024-10-07 11:34:53.577576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:11.960 [2024-10-07 11:34:53.577586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577597] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:11.960 [2024-10-07 11:34:53.577607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:11.960 [2024-10-07 11:34:53.577619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.960 [2024-10-07 11:34:53.577642] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:11.960 [2024-10-07 11:34:53.577652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:11.960 [2024-10-07 11:34:53.577663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:11.960 [2024-10-07 11:34:53.577673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:11.960 [2024-10-07 11:34:53.577684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:11.960 [2024-10-07 11:34:53.577694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:11.960 [2024-10-07 11:34:53.577707] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:11.960 [2024-10-07 11:34:53.577719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.960 [2024-10-07 11:34:53.577747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:11.960 [2024-10-07 11:34:53.577758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:11.960 [2024-10-07 11:34:53.577771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:11.960 [2024-10-07 11:34:53.577782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:11.960 [2024-10-07 11:34:53.577796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:11.960 [2024-10-07 11:34:53.577806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:11.960 [2024-10-07 11:34:53.577818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:11.960 [2024-10-07 11:34:53.577829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:11.960 [2024-10-07 11:34:53.577841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:11.960 [2024-10-07 11:34:53.577851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:11.960 [2024-10-07 11:34:53.577864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:11.960 [2024-10-07 11:34:53.577874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:11.960 [2024-10-07 11:34:53.577886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:11.960 [2024-10-07 11:34:53.577897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:11.960 [2024-10-07 11:34:53.577909] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:11.960 [2024-10-07 
11:34:53.577921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.960 [2024-10-07 11:34:53.577940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:11.961 [2024-10-07 11:34:53.577950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:11.961 [2024-10-07 11:34:53.577963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:11.961 [2024-10-07 11:34:53.577973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:11.961 [2024-10-07 11:34:53.577986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.961 [2024-10-07 11:34:53.577997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:11.961 [2024-10-07 11:34:53.578010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:24:11.961 [2024-10-07 11:34:53.578020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.961 [2024-10-07 11:34:53.617454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.961 [2024-10-07 11:34:53.617495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.961 [2024-10-07 11:34:53.617514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.429 ms 00:24:11.961 [2024-10-07 11:34:53.617525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.961 [2024-10-07 11:34:53.617657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.961 [2024-10-07 11:34:53.617670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:11.961 [2024-10-07 11:34:53.617686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:11.961 [2024-10-07 11:34:53.617697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.220 [2024-10-07 11:34:53.671448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.220 [2024-10-07 11:34:53.671493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.220 [2024-10-07 11:34:53.671520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.803 ms 00:24:12.220 [2024-10-07 11:34:53.671535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.220 [2024-10-07 11:34:53.671657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.220 [2024-10-07 11:34:53.671675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.220 [2024-10-07 11:34:53.671696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:12.220 [2024-10-07 11:34:53.671716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.220 [2024-10-07 11:34:53.672202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.220 [2024-10-07 11:34:53.672228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.220 [2024-10-07 11:34:53.672249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:24:12.220 [2024-10-07 11:34:53.672262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:12.220 [2024-10-07 11:34:53.672422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.220 [2024-10-07 11:34:53.672440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.220 [2024-10-07 11:34:53.672459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:12.220 [2024-10-07 11:34:53.672472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.220 [2024-10-07 11:34:53.695352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.220 [2024-10-07 11:34:53.695391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.220 [2024-10-07 11:34:53.695410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:24:12.220 [2024-10-07 11:34:53.695425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.220 [2024-10-07 11:34:53.714867] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:12.220 [2024-10-07 11:34:53.714921] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:12.221 [2024-10-07 11:34:53.714943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.714956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:12.221 [2024-10-07 11:34:53.714974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.428 ms 00:24:12.221 [2024-10-07 11:34:53.714985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.745783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.745868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:12.221 [2024-10-07 11:34:53.745890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.714 ms 00:24:12.221 [2024-10-07 11:34:53.745917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.766225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.766304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:12.221 [2024-10-07 11:34:53.766334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.166 ms 00:24:12.221 [2024-10-07 11:34:53.766345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.786080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.786153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:12.221 [2024-10-07 11:34:53.786176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.608 ms 00:24:12.221 [2024-10-07 11:34:53.786187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.787150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.787182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:12.221 [2024-10-07 11:34:53.787201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:24:12.221 [2024-10-07 11:34:53.787212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 
11:34:53.877557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.877618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:12.221 [2024-10-07 11:34:53.877641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.440 ms 00:24:12.221 [2024-10-07 11:34:53.877657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.889048] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:12.221 [2024-10-07 11:34:53.905454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.905520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:12.221 [2024-10-07 11:34:53.905537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.704 ms 00:24:12.221 [2024-10-07 11:34:53.905554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.905705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.905724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:12.221 [2024-10-07 11:34:53.905736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:12.221 [2024-10-07 11:34:53.905766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.905831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.905848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:12.221 [2024-10-07 11:34:53.905860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:12.221 [2024-10-07 11:34:53.905874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.905901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.905917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:12.221 [2024-10-07 11:34:53.905928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:12.221 [2024-10-07 11:34:53.905953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.221 [2024-10-07 11:34:53.905994] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:12.221 [2024-10-07 11:34:53.906022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.221 [2024-10-07 11:34:53.906033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:12.221 [2024-10-07 11:34:53.906048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:12.221 [2024-10-07 11:34:53.906058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.479 [2024-10-07 11:34:53.942245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.479 [2024-10-07 11:34:53.942300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:12.479 [2024-10-07 11:34:53.942324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.212 ms 00:24:12.479 [2024-10-07 11:34:53.942335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.479 [2024-10-07 11:34:53.942471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.479 [2024-10-07 11:34:53.942486] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:12.479 [2024-10-07 11:34:53.942502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:12.479 [2024-10-07 11:34:53.942512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.479 [2024-10-07 11:34:53.943620] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:12.479 [2024-10-07 11:34:53.948069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.133 ms, result 0 00:24:12.479 [2024-10-07 11:34:53.949470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:12.479 Some configs were skipped because the RPC state that can call them passed over. 00:24:12.480 11:34:54 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:12.738 [2024-10-07 11:34:54.204837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.738 [2024-10-07 11:34:54.204912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:12.738 [2024-10-07 11:34:54.204935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:24:12.738 [2024-10-07 11:34:54.204951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.738 [2024-10-07 11:34:54.204993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.726 ms, result 0 00:24:12.738 true 00:24:12.738 11:34:54 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:12.738 [2024-10-07 11:34:54.412202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.738 [2024-10-07 11:34:54.412261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:12.738 [2024-10-07 11:34:54.412282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:24:12.738 [2024-10-07 11:34:54.412293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.738 [2024-10-07 11:34:54.412344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.217 ms, result 0 00:24:12.738 true 00:24:12.738 11:34:54 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76716 00:24:12.738 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76716 ']' 00:24:12.738 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76716 00:24:12.738 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:24:12.738 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.738 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76716 00:24:12.997 killing process with pid 76716 00:24:12.997 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:12.997 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:12.997 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76716' 00:24:12.997 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76716 00:24:12.997 11:34:54 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76716 00:24:13.932 [2024-10-07 11:34:55.592257] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.933 [2024-10-07 11:34:55.592313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:13.933 [2024-10-07 11:34:55.592329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:13.933 [2024-10-07 11:34:55.592341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.933 [2024-10-07 11:34:55.592365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:13.933 [2024-10-07 11:34:55.596668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.933 [2024-10-07 11:34:55.596703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:13.933 [2024-10-07 11:34:55.596722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.287 ms 00:24:13.933 [2024-10-07 11:34:55.596732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.933 [2024-10-07 11:34:55.596993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.933 [2024-10-07 11:34:55.597012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:13.933 [2024-10-07 11:34:55.597025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:24:13.933 [2024-10-07 11:34:55.597042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.933 [2024-10-07 11:34:55.600345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.933 [2024-10-07 11:34:55.600381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:13.933 [2024-10-07 11:34:55.600396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:24:13.933 [2024-10-07 11:34:55.600407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.933 [2024-10-07 11:34:55.606063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.933 [2024-10-07 11:34:55.606097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:13.933 [2024-10-07 11:34:55.606114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.623 ms 00:24:13.933 [2024-10-07 11:34:55.606128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.933 [2024-10-07 11:34:55.622053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.933 [2024-10-07 11:34:55.622120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:13.933 [2024-10-07 11:34:55.622145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.882 ms 00:24:13.933 [2024-10-07 11:34:55.622154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.933 [2024-10-07 11:34:55.632773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.933 [2024-10-07 11:34:55.632826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:13.933 [2024-10-07 11:34:55.632845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.551 ms 00:24:13.933 [2024-10-07 11:34:55.632867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.933 [2024-10-07 11:34:55.633024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.933 [2024-10-07 11:34:55.633039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:13.933 [2024-10-07 11:34:55.633052] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:13.933 [2024-10-07 11:34:55.633066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.192 [2024-10-07 11:34:55.649094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.192 [2024-10-07 11:34:55.649132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:14.192 [2024-10-07 11:34:55.649154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.018 ms 00:24:14.192 [2024-10-07 11:34:55.649165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.192 [2024-10-07 11:34:55.664320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.192 [2024-10-07 11:34:55.664353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:14.192 [2024-10-07 11:34:55.664380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.120 ms 00:24:14.192 [2024-10-07 11:34:55.664390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.193 [2024-10-07 11:34:55.679015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.193 [2024-10-07 11:34:55.679050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:14.193 [2024-10-07 11:34:55.679068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.589 ms 00:24:14.193 [2024-10-07 11:34:55.679078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.193 [2024-10-07 11:34:55.693523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.193 [2024-10-07 11:34:55.693558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:14.193 [2024-10-07 11:34:55.693577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.381 ms 00:24:14.193 [2024-10-07 11:34:55.693587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.193 [2024-10-07 11:34:55.693643] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:14.193 [2024-10-07 11:34:55.693667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 
11:34:55.693820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.693985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:24:14.193 [2024-10-07 11:34:55.694163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:14.193 [2024-10-07 11:34:55.694867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.694986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.695003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.695014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.695030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.695041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.695057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:14.194 [2024-10-07 11:34:55.695076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:14.194 [2024-10-07 11:34:55.695097] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c659e091-29a7-4fa5-b97b-7822cea8a8a4 00:24:14.194 [2024-10-07 11:34:55.695108] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:14.194 [2024-10-07 11:34:55.695123] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:14.194 [2024-10-07 11:34:55.695133] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:14.194 [2024-10-07 11:34:55.695149] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:14.194 [2024-10-07 11:34:55.695171] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:14.194 [2024-10-07 11:34:55.695188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:14.194 [2024-10-07 11:34:55.695203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:14.194 [2024-10-07 11:34:55.695216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:14.194 [2024-10-07 11:34:55.695226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:14.194 [2024-10-07 11:34:55.695241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
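The "WAF: inf" entry in the statistics dump above follows directly from the two counters printed beside it: write amplification factor is conventionally the ratio of media writes to user writes, and this run issued no user I/O. A minimal check of that arithmetic, assuming the conventional WAF definition (the numbers are taken from the dump; nothing below is an SPDK identifier):

    # WAF = total (media) writes / user writes; with zero user writes the
    # ratio is undefined, which the FTL reports as "inf".
    awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'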
00:24:14.194 [2024-10-07 11:34:55.695251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:14.194 [2024-10-07 11:34:55.695269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:24:14.194 [2024-10-07 11:34:55.695279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.194 [2024-10-07 11:34:55.715104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.194 [2024-10-07 11:34:55.715137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:14.194 [2024-10-07 11:34:55.715160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.826 ms 00:24:14.194 [2024-10-07 11:34:55.715171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.194 [2024-10-07 11:34:55.715761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.194 [2024-10-07 11:34:55.715786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:14.194 [2024-10-07 11:34:55.715803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:24:14.194 [2024-10-07 11:34:55.715814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.194 [2024-10-07 11:34:55.778526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.194 [2024-10-07 11:34:55.778586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.194 [2024-10-07 11:34:55.778607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.194 [2024-10-07 11:34:55.778623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.194 [2024-10-07 11:34:55.778769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.194 [2024-10-07 11:34:55.778783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.194 [2024-10-07 11:34:55.778799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.194 [2024-10-07 11:34:55.778810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.194 [2024-10-07 11:34:55.778873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.194 [2024-10-07 11:34:55.778886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.194 [2024-10-07 11:34:55.778907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.194 [2024-10-07 11:34:55.778917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.194 [2024-10-07 11:34:55.778948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.194 [2024-10-07 11:34:55.778959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.194 [2024-10-07 11:34:55.778974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.194 [2024-10-07 11:34:55.778984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.452 [2024-10-07 11:34:55.904417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.452 [2024-10-07 11:34:55.904485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.452 [2024-10-07 11:34:55.904503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.452 [2024-10-07 11:34:55.904515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.452 [2024-10-07 
11:34:56.006492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.452 [2024-10-07 11:34:56.006557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.452 [2024-10-07 11:34:56.006575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.452 [2024-10-07 11:34:56.006586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.452 [2024-10-07 11:34:56.006703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.452 [2024-10-07 11:34:56.006716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.452 [2024-10-07 11:34:56.006733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.452 [2024-10-07 11:34:56.006759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.452 [2024-10-07 11:34:56.006794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.452 [2024-10-07 11:34:56.006809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.452 [2024-10-07 11:34:56.006822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.452 [2024-10-07 11:34:56.006832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.452 [2024-10-07 11:34:56.006953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.452 [2024-10-07 11:34:56.006966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.452 [2024-10-07 11:34:56.006979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.452 [2024-10-07 11:34:56.006989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.453 [2024-10-07 11:34:56.007029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.453 [2024-10-07 11:34:56.007042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:14.453 [2024-10-07 11:34:56.007058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.453 [2024-10-07 11:34:56.007069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.453 [2024-10-07 11:34:56.007111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.453 [2024-10-07 11:34:56.007122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.453 [2024-10-07 11:34:56.007138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.453 [2024-10-07 11:34:56.007148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.453 [2024-10-07 11:34:56.007195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.453 [2024-10-07 11:34:56.007210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.453 [2024-10-07 11:34:56.007223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.453 [2024-10-07 11:34:56.007233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.453 [2024-10-07 11:34:56.007381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 415.768 ms, result 0 00:24:15.867 11:34:57 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:15.867 [2024-10-07 11:34:57.269857] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:15.867 [2024-10-07 11:34:57.270177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76780 ] 00:24:15.867 [2024-10-07 11:34:57.442398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.126 [2024-10-07 11:34:57.657676] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.384 [2024-10-07 11:34:58.035292] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.384 [2024-10-07 11:34:58.035372] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.644 [2024-10-07 11:34:58.197511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.644 [2024-10-07 11:34:58.197577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:16.644 [2024-10-07 11:34:58.197597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:16.644 [2024-10-07 11:34:58.197608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.644 [2024-10-07 11:34:58.200789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.644 [2024-10-07 11:34:58.200830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.644 [2024-10-07 11:34:58.200843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:24:16.644 [2024-10-07 11:34:58.200853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.644 [2024-10-07 11:34:58.200956] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:16.644 [2024-10-07 11:34:58.201922] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:16.644 [2024-10-07 11:34:58.201957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.644 [2024-10-07 11:34:58.201968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.644 [2024-10-07 11:34:58.201982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:24:16.644 [2024-10-07 11:34:58.201993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.644 [2024-10-07 11:34:58.203493] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:16.644 [2024-10-07 11:34:58.222819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.644 [2024-10-07 11:34:58.222878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:16.644 [2024-10-07 11:34:58.222894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.357 ms 00:24:16.644 [2024-10-07 11:34:58.222905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.223013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.645 [2024-10-07 11:34:58.223028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:16.645 [2024-10-07 11:34:58.223043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:16.645 [2024-10-07 
11:34:58.223053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.229809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.645 [2024-10-07 11:34:58.229841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.645 [2024-10-07 11:34:58.229854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.722 ms 00:24:16.645 [2024-10-07 11:34:58.229864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.229968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.645 [2024-10-07 11:34:58.229986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.645 [2024-10-07 11:34:58.229998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:16.645 [2024-10-07 11:34:58.230009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.230040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.645 [2024-10-07 11:34:58.230052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:16.645 [2024-10-07 11:34:58.230062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:16.645 [2024-10-07 11:34:58.230072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.230098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:16.645 [2024-10-07 11:34:58.234725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.645 [2024-10-07 11:34:58.234765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.645 [2024-10-07 11:34:58.234777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:24:16.645 [2024-10-07 11:34:58.234787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.234858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.645 [2024-10-07 11:34:58.234875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:16.645 [2024-10-07 11:34:58.234886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:16.645 [2024-10-07 11:34:58.234896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.234919] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:16.645 [2024-10-07 11:34:58.234941] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:16.645 [2024-10-07 11:34:58.234978] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:16.645 [2024-10-07 11:34:58.234996] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:16.645 [2024-10-07 11:34:58.235089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:16.645 [2024-10-07 11:34:58.235102] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:16.645 [2024-10-07 11:34:58.235115] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
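The capacity figures in the layout dump that follows are internally consistent: the L2P region stores one address per user block, so its size is the entry count multiplied by the address size. A quick sanity check using the dump's own figures (the shell variables are illustrative, not part of the test scripts):

    # L2P region size = entries * bytes per entry
    entries=23592960   # "L2P entries" from the layout dump
    addr_size=4        # "L2P address size" from the layout dump
    echo "$(( entries * addr_size / 1024 / 1024 )) MiB"   # prints "90 MiB", matching "Region l2p ... blocks: 90.00 MiB"

The same entry count also explains the second bdev_ftl_unmap call earlier in the log: --lba 23591936 --num_blocks 1024 ends exactly at block 23592960, i.e. the test trims the device's final 1024 blocks.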
00:24:16.645 [2024-10-07 11:34:58.235129] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235141] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:16.645 [2024-10-07 11:34:58.235162] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:16.645 [2024-10-07 11:34:58.235172] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:16.645 [2024-10-07 11:34:58.235182] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:16.645 [2024-10-07 11:34:58.235192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.645 [2024-10-07 11:34:58.235202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:16.645 [2024-10-07 11:34:58.235225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:24:16.645 [2024-10-07 11:34:58.235235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.235312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.645 [2024-10-07 11:34:58.235323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:16.645 [2024-10-07 11:34:58.235334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:16.645 [2024-10-07 11:34:58.235343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.645 [2024-10-07 11:34:58.235432] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:16.645 [2024-10-07 11:34:58.235445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:16.645 [2024-10-07 11:34:58.235456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:16.645 [2024-10-07 11:34:58.235490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:16.645 [2024-10-07 11:34:58.235518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.645 [2024-10-07 11:34:58.235539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:16.645 [2024-10-07 11:34:58.235557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:16.645 [2024-10-07 11:34:58.235567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.645 [2024-10-07 11:34:58.235576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:16.645 [2024-10-07 11:34:58.235586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:16.645 [2024-10-07 11:34:58.235595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:24:16.645 [2024-10-07 11:34:58.235613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:16.645 [2024-10-07 11:34:58.235641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:16.645 [2024-10-07 11:34:58.235669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:16.645 [2024-10-07 11:34:58.235696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:16.645 [2024-10-07 11:34:58.235723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:16.645 [2024-10-07 11:34:58.235762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.645 [2024-10-07 11:34:58.235779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:16.645 [2024-10-07 11:34:58.235789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:16.645 [2024-10-07 11:34:58.235798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.645 [2024-10-07 11:34:58.235807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:16.645 [2024-10-07 11:34:58.235816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:16.645 [2024-10-07 11:34:58.235825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:16.645 [2024-10-07 11:34:58.235843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:16.645 [2024-10-07 11:34:58.235853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235862] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:16.645 [2024-10-07 11:34:58.235872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:16.645 [2024-10-07 11:34:58.235882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.645 [2024-10-07 11:34:58.235902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:16.645 [2024-10-07 11:34:58.235911] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:16.645 [2024-10-07 11:34:58.235920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:16.645 [2024-10-07 11:34:58.235929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:16.645 [2024-10-07 11:34:58.235938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:16.645 [2024-10-07 11:34:58.235947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:16.645 [2024-10-07 11:34:58.235958] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:16.645 [2024-10-07 11:34:58.235970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.645 [2024-10-07 11:34:58.235985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:16.645 [2024-10-07 11:34:58.235996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:16.645 [2024-10-07 11:34:58.236007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:16.645 [2024-10-07 11:34:58.236017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:16.646 [2024-10-07 11:34:58.236027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:16.646 [2024-10-07 11:34:58.236037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:16.646 [2024-10-07 11:34:58.236047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:16.646 [2024-10-07 11:34:58.236057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:16.646 [2024-10-07 11:34:58.236068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:16.646 [2024-10-07 11:34:58.236078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:16.646 [2024-10-07 11:34:58.236088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:16.646 [2024-10-07 11:34:58.236098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:16.646 [2024-10-07 11:34:58.236108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:16.646 [2024-10-07 11:34:58.236118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:16.646 [2024-10-07 11:34:58.236129] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:16.646 [2024-10-07 11:34:58.236139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.646 [2024-10-07 11:34:58.236151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:16.646 [2024-10-07 11:34:58.236161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:16.646 [2024-10-07 11:34:58.236171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:16.646 [2024-10-07 11:34:58.236182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:16.646 [2024-10-07 11:34:58.236193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.646 [2024-10-07 11:34:58.236207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:16.646 [2024-10-07 11:34:58.236217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:24:16.646 [2024-10-07 11:34:58.236226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.646 [2024-10-07 11:34:58.282051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.646 [2024-10-07 11:34:58.282269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:16.646 [2024-10-07 11:34:58.282303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.842 ms 00:24:16.646 [2024-10-07 11:34:58.282314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.646 [2024-10-07 11:34:58.282472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.646 [2024-10-07 11:34:58.282486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:16.646 [2024-10-07 11:34:58.282497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:16.646 [2024-10-07 11:34:58.282507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.646 [2024-10-07 11:34:58.330030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.646 [2024-10-07 11:34:58.330075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.646 [2024-10-07 11:34:58.330091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.575 ms 00:24:16.646 [2024-10-07 11:34:58.330102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.646 [2024-10-07 11:34:58.330229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.646 [2024-10-07 11:34:58.330242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.646 [2024-10-07 11:34:58.330254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:16.646 [2024-10-07 11:34:58.330264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.646 [2024-10-07 11:34:58.330711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.646 [2024-10-07 11:34:58.330725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.646 [2024-10-07 11:34:58.330736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:24:16.646 [2024-10-07 11:34:58.330758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.646 [2024-10-07 11:34:58.330884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:16.646 [2024-10-07 11:34:58.330898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.646 [2024-10-07 11:34:58.330908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:24:16.646 [2024-10-07 11:34:58.330919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.646 [2024-10-07 11:34:58.350522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.646 [2024-10-07 11:34:58.350574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.646 [2024-10-07 11:34:58.350589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.611 ms 00:24:16.646 [2024-10-07 11:34:58.350602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.370370] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:16.905 [2024-10-07 11:34:58.370422] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:16.905 [2024-10-07 11:34:58.370439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.370451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:16.905 [2024-10-07 11:34:58.370463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.689 ms 00:24:16.905 [2024-10-07 11:34:58.370473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.400618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.400672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:16.905 [2024-10-07 11:34:58.400694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.074 ms 00:24:16.905 [2024-10-07 11:34:58.400705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.419177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.419226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:16.905 [2024-10-07 11:34:58.419241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.385 ms 00:24:16.905 [2024-10-07 11:34:58.419251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.437493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.437540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:16.905 [2024-10-07 11:34:58.437555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.180 ms 00:24:16.905 [2024-10-07 11:34:58.437565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.438367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.438515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:16.905 [2024-10-07 11:34:58.438538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:24:16.905 [2024-10-07 11:34:58.438550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.526510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 
11:34:58.526572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:16.905 [2024-10-07 11:34:58.526590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.063 ms 00:24:16.905 [2024-10-07 11:34:58.526601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.538891] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:16.905 [2024-10-07 11:34:58.555794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.555846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:16.905 [2024-10-07 11:34:58.555864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.071 ms 00:24:16.905 [2024-10-07 11:34:58.555874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.556017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.556031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:16.905 [2024-10-07 11:34:58.556042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:16.905 [2024-10-07 11:34:58.556052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.556116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.556130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:16.905 [2024-10-07 11:34:58.556142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:16.905 [2024-10-07 11:34:58.556151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.556175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.556185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:16.905 [2024-10-07 11:34:58.556196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:16.905 [2024-10-07 11:34:58.556206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.556245] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:16.905 [2024-10-07 11:34:58.556258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.556268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:16.905 [2024-10-07 11:34:58.556282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:16.905 [2024-10-07 11:34:58.556292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.592513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.592560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:16.905 [2024-10-07 11:34:58.592575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.256 ms 00:24:16.905 [2024-10-07 11:34:58.592586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.592708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.905 [2024-10-07 11:34:58.592730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:16.905 [2024-10-07 
11:34:58.592757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:16.905 [2024-10-07 11:34:58.592768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.905 [2024-10-07 11:34:58.593686] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:16.905 [2024-10-07 11:34:58.597985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.519 ms, result 0 00:24:16.905 [2024-10-07 11:34:58.598727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:17.163 [2024-10-07 11:34:58.616960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:18.095  [2024-10-07T11:35:00.739Z] Copying: 30/256 [MB] (30 MBps) [2024-10-07T11:35:01.707Z] Copying: 57/256 [MB] (26 MBps) [2024-10-07T11:35:03.083Z] Copying: 85/256 [MB] (27 MBps) [2024-10-07T11:35:04.016Z] Copying: 113/256 [MB] (28 MBps) [2024-10-07T11:35:04.977Z] Copying: 141/256 [MB] (27 MBps) [2024-10-07T11:35:05.912Z] Copying: 169/256 [MB] (27 MBps) [2024-10-07T11:35:06.846Z] Copying: 196/256 [MB] (27 MBps) [2024-10-07T11:35:07.778Z] Copying: 222/256 [MB] (26 MBps) [2024-10-07T11:35:08.035Z] Copying: 249/256 [MB] (26 MBps) [2024-10-07T11:35:08.294Z] Copying: 256/256 [MB] (average 27 MBps)[2024-10-07 11:35:08.195609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.583 [2024-10-07 11:35:08.219037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.583 [2024-10-07 11:35:08.219080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:26.583 [2024-10-07 11:35:08.219096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:26.583 [2024-10-07 11:35:08.219107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.583 [2024-10-07 11:35:08.219134] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:26.583 [2024-10-07 11:35:08.223234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.583 [2024-10-07 11:35:08.223265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:26.583 [2024-10-07 11:35:08.223279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.090 ms 00:24:26.583 [2024-10-07 11:35:08.223289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.583 [2024-10-07 11:35:08.223536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.583 [2024-10-07 11:35:08.223549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:26.583 [2024-10-07 11:35:08.223565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:24:26.583 [2024-10-07 11:35:08.223575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.583 [2024-10-07 11:35:08.226438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.583 [2024-10-07 11:35:08.226460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:26.583 [2024-10-07 11:35:08.226473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.851 ms 00:24:26.583 [2024-10-07 11:35:08.226484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.583 [2024-10-07 
11:35:08.232152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.583 [2024-10-07 11:35:08.232184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:26.583 [2024-10-07 11:35:08.232202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.657 ms 00:24:26.583 [2024-10-07 11:35:08.232212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.583 [2024-10-07 11:35:08.268259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.583 [2024-10-07 11:35:08.268299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:26.583 [2024-10-07 11:35:08.268314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.036 ms 00:24:26.583 [2024-10-07 11:35:08.268325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.583 [2024-10-07 11:35:08.289349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.583 [2024-10-07 11:35:08.289390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:26.583 [2024-10-07 11:35:08.289404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.997 ms 00:24:26.583 [2024-10-07 11:35:08.289416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.583 [2024-10-07 11:35:08.289556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.583 [2024-10-07 11:35:08.289570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:26.583 [2024-10-07 11:35:08.289583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:26.583 [2024-10-07 11:35:08.289592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.841 [2024-10-07 11:35:08.325865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.841 [2024-10-07 11:35:08.325903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:26.841 [2024-10-07 11:35:08.325917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.306 ms 00:24:26.841 [2024-10-07 11:35:08.325927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.841 [2024-10-07 11:35:08.361175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.841 [2024-10-07 11:35:08.361212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:26.841 [2024-10-07 11:35:08.361225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.248 ms 00:24:26.841 [2024-10-07 11:35:08.361236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.841 [2024-10-07 11:35:08.396057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.841 [2024-10-07 11:35:08.396097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:26.841 [2024-10-07 11:35:08.396111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.821 ms 00:24:26.841 [2024-10-07 11:35:08.396121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.841 [2024-10-07 11:35:08.431553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.841 [2024-10-07 11:35:08.431592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:26.841 [2024-10-07 11:35:08.431606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.403 ms 00:24:26.841 [2024-10-07 11:35:08.431616] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.841 [2024-10-07 11:35:08.431674] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:26.841 [2024-10-07 11:35:08.431691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.431994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:26.841 [2024-10-07 11:35:08.432426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432478] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 
11:35:08.432749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:26.842 [2024-10-07 11:35:08.432792] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:26.842 [2024-10-07 11:35:08.432802] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c659e091-29a7-4fa5-b97b-7822cea8a8a4 00:24:26.842 [2024-10-07 11:35:08.432813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:26.842 [2024-10-07 11:35:08.432824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:26.842 [2024-10-07 11:35:08.432834] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:26.842 [2024-10-07 11:35:08.432848] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:26.842 [2024-10-07 11:35:08.432858] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:26.842 [2024-10-07 11:35:08.432869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:26.842 [2024-10-07 11:35:08.432879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:26.842 [2024-10-07 11:35:08.432888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:26.842 [2024-10-07 11:35:08.432897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:26.842 [2024-10-07 11:35:08.432907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.842 [2024-10-07 11:35:08.432917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:26.842 [2024-10-07 11:35:08.432928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:24:26.842 [2024-10-07 11:35:08.432938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.842 [2024-10-07 11:35:08.452646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.842 [2024-10-07 11:35:08.452686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:26.842 [2024-10-07 11:35:08.452699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.719 ms 00:24:26.842 [2024-10-07 11:35:08.452709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.842 [2024-10-07 11:35:08.453256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.842 [2024-10-07 11:35:08.453274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:26.842 [2024-10-07 11:35:08.453285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:24:26.842 [2024-10-07 11:35:08.453295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.842 [2024-10-07 11:35:08.499936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.842 [2024-10-07 11:35:08.499978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.842 [2024-10-07 11:35:08.499991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.842 [2024-10-07 11:35:08.500002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.842 [2024-10-07 11:35:08.500086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.842 [2024-10-07 11:35:08.500098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:24:26.842 [2024-10-07 11:35:08.500109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.842 [2024-10-07 11:35:08.500119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.842 [2024-10-07 11:35:08.500168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.842 [2024-10-07 11:35:08.500185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.842 [2024-10-07 11:35:08.500196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.842 [2024-10-07 11:35:08.500206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.842 [2024-10-07 11:35:08.500225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.842 [2024-10-07 11:35:08.500235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.842 [2024-10-07 11:35:08.500244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.842 [2024-10-07 11:35:08.500254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.623833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.100 [2024-10-07 11:35:08.623918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.100 [2024-10-07 11:35:08.623934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.100 [2024-10-07 11:35:08.623945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.725756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.100 [2024-10-07 11:35:08.725817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:27.100 [2024-10-07 11:35:08.725832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.100 [2024-10-07 11:35:08.725859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.725968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.100 [2024-10-07 11:35:08.725980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.100 [2024-10-07 11:35:08.725990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.100 [2024-10-07 11:35:08.726007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.726037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.100 [2024-10-07 11:35:08.726049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.100 [2024-10-07 11:35:08.726060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.100 [2024-10-07 11:35:08.726070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.726207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.100 [2024-10-07 11:35:08.726221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.100 [2024-10-07 11:35:08.726231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.100 [2024-10-07 11:35:08.726245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.726292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.100 [2024-10-07 11:35:08.726305] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:27.100 [2024-10-07 11:35:08.726315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.100 [2024-10-07 11:35:08.726325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.726366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.100 [2024-10-07 11:35:08.726378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.100 [2024-10-07 11:35:08.726388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.100 [2024-10-07 11:35:08.726397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.726448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.100 [2024-10-07 11:35:08.726468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.100 [2024-10-07 11:35:08.726479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.100 [2024-10-07 11:35:08.726489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.100 [2024-10-07 11:35:08.726638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.419 ms, result 0 00:24:28.476 00:24:28.476 00:24:28.476 11:35:09 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:28.734 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:24:28.734 11:35:10 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:24:28.734 11:35:10 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:24:28.734 11:35:10 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:28.734 11:35:10 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:28.734 11:35:10 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:24:28.993 11:35:10 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:28.993 11:35:10 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76716 00:24:28.993 11:35:10 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76716 ']' 00:24:28.993 11:35:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76716 00:24:28.993 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76716) - No such process 00:24:28.993 Process with pid 76716 is not found 00:24:28.993 11:35:10 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 76716 is not found' 00:24:28.993 00:24:28.993 real 1m8.797s 00:24:28.993 user 1m32.860s 00:24:28.993 sys 0m6.864s 00:24:28.993 11:35:10 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.993 ************************************ 00:24:28.993 END TEST ftl_trim 00:24:28.993 ************************************ 00:24:28.993 11:35:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:28.993 11:35:10 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:28.993 11:35:10 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:28.993 11:35:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.993 11:35:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:28.993 ************************************ 
00:24:28.993 START TEST ftl_restore
00:24:28.993 ************************************
11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:24:28.993 * Looking for test storage...
00:24:28.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:24:28.993 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:24:28.993 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version
00:24:28.993 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:24:29.252 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:29.252 11:35:10 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:24:29.252 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:29.252 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:24:29.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:29.252 --rc genhtml_branch_coverage=1
00:24:29.252 --rc genhtml_function_coverage=1
00:24:29.252 --rc genhtml_legend=1
00:24:29.252 --rc geninfo_all_blocks=1
00:24:29.252 --rc geninfo_unexecuted_blocks=1
00:24:29.252
00:24:29.252 '
00:24:29.252 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:24:29.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:29.252 --rc genhtml_branch_coverage=1
00:24:29.252 --rc genhtml_function_coverage=1
00:24:29.252 --rc genhtml_legend=1
00:24:29.252 --rc geninfo_all_blocks=1
00:24:29.252 --rc geninfo_unexecuted_blocks=1
00:24:29.252
00:24:29.252 '
00:24:29.252 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:24:29.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:29.252 --rc genhtml_branch_coverage=1
00:24:29.252 --rc genhtml_function_coverage=1
00:24:29.252 --rc genhtml_legend=1
00:24:29.252 --rc geninfo_all_blocks=1
00:24:29.252 --rc geninfo_unexecuted_blocks=1
00:24:29.252
00:24:29.252 '
00:24:29.252 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:24:29.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:29.252 --rc genhtml_branch_coverage=1
00:24:29.252 --rc genhtml_function_coverage=1
00:24:29.252 --rc genhtml_legend=1
00:24:29.252 --rc geninfo_all_blocks=1
00:24:29.252 --rc geninfo_unexecuted_blocks=1
00:24:29.252
00:24:29.252 '
00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
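The xtrace above is the lcov version gate: lt 1.15 2 hands off to cmp_versions, which splits both version strings on '.', '-' and ':' (the IFS=.-: reads), then compares them field by field, treating missing fields as 0. A minimal bash sketch of that comparison, reconstructed from the trace records alone (the real helpers live in scripts/common.sh and may differ in detail):

    # Sketch of the version-compare helpers exercised by the trace above.
    # Reconstructed from the xtrace; exact bodies are assumptions.
    lt() { cmp_versions "$1" '<' "$2"; }          # 'lt 1.15 2': is 1.15 < 2?

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"            # '1.15' -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"            # '2'    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}       # missing fields compare as 0
            (( a == b )) && continue
            case $op in
                '<') (( a < b )); return ;;       # return the status of the test
                '>') (( a > b )); return ;;
            esac
        done
        return 1                                  # equal versions fail a strict compare
    }

The first field already decides this run (1 < 2), which is why the trace returns 0 after a single loop pass and then sets the pre-2.x lcov options (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) seen in the LCOV_OPTS/LCOV exports just above.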
00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:29.252 11:35:10 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.H2pyF0luQq 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:29.253 
11:35:10 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76984 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:29.253 11:35:10 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76984 00:24:29.253 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 76984 ']' 00:24:29.253 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.253 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.253 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.253 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.253 11:35:10 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:29.253 [2024-10-07 11:35:10.947158] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:29.253 [2024-10-07 11:35:10.947300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76984 ] 00:24:29.512 [2024-10-07 11:35:11.120362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.770 [2024-10-07 11:35:11.340128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.717 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.717 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:24:30.717 11:35:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:30.717 11:35:12 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:24:30.717 11:35:12 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:30.717 11:35:12 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:24:30.717 11:35:12 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:24:30.717 11:35:12 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:30.975 11:35:12 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:30.975 11:35:12 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:24:30.975 11:35:12 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:30.975 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:24:30.975 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:30.975 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:24:30.975 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:24:30.975 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:31.234 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:31.234 { 00:24:31.235 "name": "nvme0n1", 00:24:31.235 "aliases": [ 00:24:31.235 "5948f410-5e7c-4a33-ad87-cbe1064d2eff" 00:24:31.235 ], 00:24:31.235 "product_name": "NVMe disk", 00:24:31.235 "block_size": 4096, 00:24:31.235 "num_blocks": 1310720, 00:24:31.235 "uuid": 
"5948f410-5e7c-4a33-ad87-cbe1064d2eff", 00:24:31.235 "numa_id": -1, 00:24:31.235 "assigned_rate_limits": { 00:24:31.235 "rw_ios_per_sec": 0, 00:24:31.235 "rw_mbytes_per_sec": 0, 00:24:31.235 "r_mbytes_per_sec": 0, 00:24:31.235 "w_mbytes_per_sec": 0 00:24:31.235 }, 00:24:31.235 "claimed": true, 00:24:31.235 "claim_type": "read_many_write_one", 00:24:31.235 "zoned": false, 00:24:31.235 "supported_io_types": { 00:24:31.235 "read": true, 00:24:31.235 "write": true, 00:24:31.235 "unmap": true, 00:24:31.235 "flush": true, 00:24:31.235 "reset": true, 00:24:31.235 "nvme_admin": true, 00:24:31.235 "nvme_io": true, 00:24:31.235 "nvme_io_md": false, 00:24:31.235 "write_zeroes": true, 00:24:31.235 "zcopy": false, 00:24:31.235 "get_zone_info": false, 00:24:31.235 "zone_management": false, 00:24:31.235 "zone_append": false, 00:24:31.235 "compare": true, 00:24:31.235 "compare_and_write": false, 00:24:31.235 "abort": true, 00:24:31.235 "seek_hole": false, 00:24:31.235 "seek_data": false, 00:24:31.235 "copy": true, 00:24:31.235 "nvme_iov_md": false 00:24:31.235 }, 00:24:31.235 "driver_specific": { 00:24:31.235 "nvme": [ 00:24:31.235 { 00:24:31.235 "pci_address": "0000:00:11.0", 00:24:31.235 "trid": { 00:24:31.235 "trtype": "PCIe", 00:24:31.235 "traddr": "0000:00:11.0" 00:24:31.235 }, 00:24:31.235 "ctrlr_data": { 00:24:31.235 "cntlid": 0, 00:24:31.235 "vendor_id": "0x1b36", 00:24:31.235 "model_number": "QEMU NVMe Ctrl", 00:24:31.235 "serial_number": "12341", 00:24:31.235 "firmware_revision": "8.0.0", 00:24:31.235 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:31.235 "oacs": { 00:24:31.235 "security": 0, 00:24:31.235 "format": 1, 00:24:31.235 "firmware": 0, 00:24:31.235 "ns_manage": 1 00:24:31.235 }, 00:24:31.235 "multi_ctrlr": false, 00:24:31.235 "ana_reporting": false 00:24:31.235 }, 00:24:31.235 "vs": { 00:24:31.235 "nvme_version": "1.4" 00:24:31.235 }, 00:24:31.235 "ns_data": { 00:24:31.235 "id": 1, 00:24:31.235 "can_share": false 00:24:31.235 } 00:24:31.235 } 00:24:31.235 ], 00:24:31.235 "mp_policy": "active_passive" 00:24:31.235 } 00:24:31.235 } 00:24:31.235 ]' 00:24:31.235 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:31.235 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:24:31.235 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:31.235 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:24:31.235 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:24:31.235 11:35:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:24:31.235 11:35:12 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:24:31.235 11:35:12 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:31.235 11:35:12 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:24:31.235 11:35:12 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:31.235 11:35:12 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:31.493 11:35:13 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=fcfc1214-167d-46e1-ab9f-0747f577ba37 00:24:31.493 11:35:13 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:24:31.493 11:35:13 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fcfc1214-167d-46e1-ab9f-0747f577ba37 00:24:31.752 11:35:13 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=c2ed6afd-cc90-4a32-aff0-0c1138e3e7cb 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c2ed6afd-cc90-4a32-aff0-0c1138e3e7cb 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:24:32.011 11:35:13 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.011 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.011 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:32.011 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:24:32.011 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:24:32.011 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.270 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:32.270 { 00:24:32.270 "name": "0fa91b4c-58b8-4d11-94b7-6a31efa2ed27", 00:24:32.270 "aliases": [ 00:24:32.270 "lvs/nvme0n1p0" 00:24:32.270 ], 00:24:32.270 "product_name": "Logical Volume", 00:24:32.270 "block_size": 4096, 00:24:32.270 "num_blocks": 26476544, 00:24:32.270 "uuid": "0fa91b4c-58b8-4d11-94b7-6a31efa2ed27", 00:24:32.270 "assigned_rate_limits": { 00:24:32.270 "rw_ios_per_sec": 0, 00:24:32.270 "rw_mbytes_per_sec": 0, 00:24:32.270 "r_mbytes_per_sec": 0, 00:24:32.270 "w_mbytes_per_sec": 0 00:24:32.270 }, 00:24:32.270 "claimed": false, 00:24:32.270 "zoned": false, 00:24:32.270 "supported_io_types": { 00:24:32.270 "read": true, 00:24:32.270 "write": true, 00:24:32.270 "unmap": true, 00:24:32.270 "flush": false, 00:24:32.270 "reset": true, 00:24:32.270 "nvme_admin": false, 00:24:32.270 "nvme_io": false, 00:24:32.270 "nvme_io_md": false, 00:24:32.270 "write_zeroes": true, 00:24:32.270 "zcopy": false, 00:24:32.270 "get_zone_info": false, 00:24:32.270 "zone_management": false, 00:24:32.270 "zone_append": false, 00:24:32.270 "compare": false, 00:24:32.270 "compare_and_write": false, 00:24:32.270 "abort": false, 00:24:32.270 "seek_hole": true, 00:24:32.270 "seek_data": true, 00:24:32.270 "copy": false, 00:24:32.270 "nvme_iov_md": false 00:24:32.270 }, 00:24:32.270 "driver_specific": { 00:24:32.270 "lvol": { 00:24:32.270 "lvol_store_uuid": "c2ed6afd-cc90-4a32-aff0-0c1138e3e7cb", 00:24:32.270 "base_bdev": "nvme0n1", 00:24:32.270 "thin_provision": true, 00:24:32.270 "num_allocated_clusters": 0, 00:24:32.270 "snapshot": false, 00:24:32.270 "clone": false, 00:24:32.270 "esnap_clone": false 00:24:32.270 } 00:24:32.270 } 00:24:32.270 } 00:24:32.270 ]' 00:24:32.270 11:35:13 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:32.270 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:24:32.270 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:32.529 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:32.529 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:32.529 11:35:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:24:32.529 11:35:13 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:24:32.529 11:35:13 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:24:32.529 11:35:13 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:32.789 11:35:14 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:32.789 11:35:14 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:32.789 11:35:14 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.789 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.789 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:32.789 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:24:32.789 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:24:32.789 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:32.789 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:32.789 { 00:24:32.789 "name": "0fa91b4c-58b8-4d11-94b7-6a31efa2ed27", 00:24:32.789 "aliases": [ 00:24:32.789 "lvs/nvme0n1p0" 00:24:32.789 ], 00:24:32.789 "product_name": "Logical Volume", 00:24:32.789 "block_size": 4096, 00:24:32.789 "num_blocks": 26476544, 00:24:32.789 "uuid": "0fa91b4c-58b8-4d11-94b7-6a31efa2ed27", 00:24:32.789 "assigned_rate_limits": { 00:24:32.789 "rw_ios_per_sec": 0, 00:24:32.789 "rw_mbytes_per_sec": 0, 00:24:32.789 "r_mbytes_per_sec": 0, 00:24:32.789 "w_mbytes_per_sec": 0 00:24:32.789 }, 00:24:32.789 "claimed": false, 00:24:32.789 "zoned": false, 00:24:32.789 "supported_io_types": { 00:24:32.789 "read": true, 00:24:32.789 "write": true, 00:24:32.789 "unmap": true, 00:24:32.789 "flush": false, 00:24:32.789 "reset": true, 00:24:32.789 "nvme_admin": false, 00:24:32.789 "nvme_io": false, 00:24:32.789 "nvme_io_md": false, 00:24:32.789 "write_zeroes": true, 00:24:32.789 "zcopy": false, 00:24:32.789 "get_zone_info": false, 00:24:32.790 "zone_management": false, 00:24:32.790 "zone_append": false, 00:24:32.790 "compare": false, 00:24:32.790 "compare_and_write": false, 00:24:32.790 "abort": false, 00:24:32.790 "seek_hole": true, 00:24:32.790 "seek_data": true, 00:24:32.790 "copy": false, 00:24:32.790 "nvme_iov_md": false 00:24:32.790 }, 00:24:32.790 "driver_specific": { 00:24:32.790 "lvol": { 00:24:32.790 "lvol_store_uuid": "c2ed6afd-cc90-4a32-aff0-0c1138e3e7cb", 00:24:32.790 "base_bdev": "nvme0n1", 00:24:32.790 "thin_provision": true, 00:24:32.790 "num_allocated_clusters": 0, 00:24:32.790 "snapshot": false, 00:24:32.790 "clone": false, 00:24:32.790 "esnap_clone": false 00:24:32.790 } 00:24:32.790 } 00:24:32.790 } 00:24:32.790 ]' 00:24:32.790 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
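The block_size/num_blocks probes traced on either side of this point come from the get_bdev_size helper in common/autotest_common.sh. A minimal sketch of its logic, reconstructed only from the @1378-@1388 xtrace lines in this run (the helper's exact body is assumed):

    # Sketch of get_bdev_size() as suggested by the xtrace; rpc.py path as used in this run.
    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 for both bdevs in this log
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # e.g. 26476544 for the lvol
        bdev_size=$((bs * nb / 1024 / 1024))           # MiB: 4096 * 26476544 / 1048576 = 103424
        echo "$bdev_size"
    }

The same arithmetic explains the earlier probe of nvme0n1: 4096 * 1310720 / 1048576 = 5120 MiB, the value ftl/common.sh stored as base_size before its [[ 103424 -le 5120 ]] check.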
00:24:33.049 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:24:33.049 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:33.049 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:33.049 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:33.049 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:24:33.049 11:35:14 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:24:33.049 11:35:14 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:33.308 11:35:14 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:24:33.308 11:35:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:33.308 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:33.308 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:33.308 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:24:33.308 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:24:33.308 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 00:24:33.308 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:33.308 { 00:24:33.308 "name": "0fa91b4c-58b8-4d11-94b7-6a31efa2ed27", 00:24:33.308 "aliases": [ 00:24:33.308 "lvs/nvme0n1p0" 00:24:33.308 ], 00:24:33.308 "product_name": "Logical Volume", 00:24:33.308 "block_size": 4096, 00:24:33.308 "num_blocks": 26476544, 00:24:33.308 "uuid": "0fa91b4c-58b8-4d11-94b7-6a31efa2ed27", 00:24:33.308 "assigned_rate_limits": { 00:24:33.308 "rw_ios_per_sec": 0, 00:24:33.308 "rw_mbytes_per_sec": 0, 00:24:33.308 "r_mbytes_per_sec": 0, 00:24:33.308 "w_mbytes_per_sec": 0 00:24:33.308 }, 00:24:33.308 "claimed": false, 00:24:33.308 "zoned": false, 00:24:33.308 "supported_io_types": { 00:24:33.308 "read": true, 00:24:33.308 "write": true, 00:24:33.308 "unmap": true, 00:24:33.308 "flush": false, 00:24:33.308 "reset": true, 00:24:33.308 "nvme_admin": false, 00:24:33.308 "nvme_io": false, 00:24:33.308 "nvme_io_md": false, 00:24:33.308 "write_zeroes": true, 00:24:33.308 "zcopy": false, 00:24:33.308 "get_zone_info": false, 00:24:33.308 "zone_management": false, 00:24:33.308 "zone_append": false, 00:24:33.308 "compare": false, 00:24:33.308 "compare_and_write": false, 00:24:33.308 "abort": false, 00:24:33.308 "seek_hole": true, 00:24:33.308 "seek_data": true, 00:24:33.308 "copy": false, 00:24:33.308 "nvme_iov_md": false 00:24:33.308 }, 00:24:33.308 "driver_specific": { 00:24:33.308 "lvol": { 00:24:33.308 "lvol_store_uuid": "c2ed6afd-cc90-4a32-aff0-0c1138e3e7cb", 00:24:33.308 "base_bdev": "nvme0n1", 00:24:33.308 "thin_provision": true, 00:24:33.308 "num_allocated_clusters": 0, 00:24:33.308 "snapshot": false, 00:24:33.308 "clone": false, 00:24:33.308 "esnap_clone": false 00:24:33.308 } 00:24:33.308 } 00:24:33.308 } 00:24:33.308 ]' 00:24:33.308 11:35:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:33.569 11:35:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:24:33.569 11:35:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:33.569 11:35:15 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:24:33.569 11:35:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:33.569 11:35:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:24:33.569 11:35:15 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:33.569 11:35:15 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 --l2p_dram_limit 10' 00:24:33.569 11:35:15 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:33.569 11:35:15 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:33.569 11:35:15 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:33.569 11:35:15 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:24:33.569 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:24:33.569 11:35:15 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0fa91b4c-58b8-4d11-94b7-6a31efa2ed27 --l2p_dram_limit 10 -c nvc0n1p0 00:24:33.569 [2024-10-07 11:35:15.246645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.246905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:33.569 [2024-10-07 11:35:15.247052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:33.569 [2024-10-07 11:35:15.247092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.247207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.247282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:33.569 [2024-10-07 11:35:15.247323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:33.569 [2024-10-07 11:35:15.247354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.247468] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:33.569 [2024-10-07 11:35:15.248533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:33.569 [2024-10-07 11:35:15.248681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.248761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:33.569 [2024-10-07 11:35:15.248803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:24:33.569 [2024-10-07 11:35:15.248836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.249144] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b8292fed-22fb-4cec-9b97-a2c299f43dd8 00:24:33.569 [2024-10-07 11:35:15.250647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.250789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:33.569 [2024-10-07 11:35:15.250869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:33.569 [2024-10-07 11:35:15.250908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.258474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 
11:35:15.258632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:33.569 [2024-10-07 11:35:15.258749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.407 ms 00:24:33.569 [2024-10-07 11:35:15.258794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.258923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.258962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:33.569 [2024-10-07 11:35:15.259051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:33.569 [2024-10-07 11:35:15.259095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.259194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.259233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:33.569 [2024-10-07 11:35:15.259264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:33.569 [2024-10-07 11:35:15.259392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.259441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:33.569 [2024-10-07 11:35:15.264777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.264894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:33.569 [2024-10-07 11:35:15.265037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.349 ms 00:24:33.569 [2024-10-07 11:35:15.265073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.265136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.265203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:33.569 [2024-10-07 11:35:15.265243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:33.569 [2024-10-07 11:35:15.265276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.265395] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:33.569 [2024-10-07 11:35:15.265551] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:33.569 [2024-10-07 11:35:15.265761] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:33.569 [2024-10-07 11:35:15.265820] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:33.569 [2024-10-07 11:35:15.265879] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:33.569 [2024-10-07 11:35:15.265928] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:33.569 [2024-10-07 11:35:15.266039] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:33.569 [2024-10-07 11:35:15.266071] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:33.569 [2024-10-07 11:35:15.266103] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:33.569 [2024-10-07 11:35:15.266133] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:33.569 [2024-10-07 11:35:15.266168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.266254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:33.569 [2024-10-07 11:35:15.266275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:24:33.569 [2024-10-07 11:35:15.266296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.266381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.569 [2024-10-07 11:35:15.266395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:33.569 [2024-10-07 11:35:15.266408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:33.569 [2024-10-07 11:35:15.266418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.569 [2024-10-07 11:35:15.266510] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:33.569 [2024-10-07 11:35:15.266522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:33.569 [2024-10-07 11:35:15.266536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.569 [2024-10-07 11:35:15.266546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.569 [2024-10-07 11:35:15.266559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:33.569 [2024-10-07 11:35:15.266569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:33.569 [2024-10-07 11:35:15.266581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:33.569 [2024-10-07 11:35:15.266590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:33.569 [2024-10-07 11:35:15.266601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:33.569 [2024-10-07 11:35:15.266610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.569 [2024-10-07 11:35:15.266622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:33.569 [2024-10-07 11:35:15.266631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:33.570 [2024-10-07 11:35:15.266643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.570 [2024-10-07 11:35:15.266652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:33.570 [2024-10-07 11:35:15.266663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:33.570 [2024-10-07 11:35:15.266672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.570 [2024-10-07 11:35:15.266686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:33.570 [2024-10-07 11:35:15.266695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:33.570 [2024-10-07 11:35:15.266706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.570 [2024-10-07 11:35:15.266715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:33.570 [2024-10-07 11:35:15.266728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:33.570 [2024-10-07 11:35:15.266869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.570 [2024-10-07 11:35:15.266922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:33.570 
[2024-10-07 11:35:15.266954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:33.570 [2024-10-07 11:35:15.266985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.570 [2024-10-07 11:35:15.267015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:33.570 [2024-10-07 11:35:15.267103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:33.570 [2024-10-07 11:35:15.267138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.570 [2024-10-07 11:35:15.267171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:33.570 [2024-10-07 11:35:15.267200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:33.570 [2024-10-07 11:35:15.267232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.570 [2024-10-07 11:35:15.267300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:33.570 [2024-10-07 11:35:15.267342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:33.570 [2024-10-07 11:35:15.267413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.570 [2024-10-07 11:35:15.267450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:33.570 [2024-10-07 11:35:15.267462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:33.570 [2024-10-07 11:35:15.267474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.570 [2024-10-07 11:35:15.267484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:33.570 [2024-10-07 11:35:15.267495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:33.570 [2024-10-07 11:35:15.267505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.570 [2024-10-07 11:35:15.267516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:33.570 [2024-10-07 11:35:15.267525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:33.570 [2024-10-07 11:35:15.267536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.570 [2024-10-07 11:35:15.267545] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:33.570 [2024-10-07 11:35:15.267558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:33.570 [2024-10-07 11:35:15.267571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.570 [2024-10-07 11:35:15.267585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.570 [2024-10-07 11:35:15.267595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:33.570 [2024-10-07 11:35:15.267610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:33.570 [2024-10-07 11:35:15.267619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:33.570 [2024-10-07 11:35:15.267631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:33.570 [2024-10-07 11:35:15.267640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:33.570 [2024-10-07 11:35:15.267652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:33.570 [2024-10-07 11:35:15.267666] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:33.570 [2024-10-07 
11:35:15.267683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.570 [2024-10-07 11:35:15.267694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:33.570 [2024-10-07 11:35:15.267707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:33.570 [2024-10-07 11:35:15.267718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:33.570 [2024-10-07 11:35:15.267732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:33.570 [2024-10-07 11:35:15.267753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:33.570 [2024-10-07 11:35:15.267766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:33.570 [2024-10-07 11:35:15.267777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:33.570 [2024-10-07 11:35:15.267790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:33.570 [2024-10-07 11:35:15.267800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:33.570 [2024-10-07 11:35:15.267815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:33.570 [2024-10-07 11:35:15.267825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:33.570 [2024-10-07 11:35:15.267839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:33.570 [2024-10-07 11:35:15.267849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:33.570 [2024-10-07 11:35:15.267862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:33.570 [2024-10-07 11:35:15.267872] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:33.570 [2024-10-07 11:35:15.267887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.570 [2024-10-07 11:35:15.267898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:33.570 [2024-10-07 11:35:15.267911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:33.570 [2024-10-07 11:35:15.267921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:33.570 [2024-10-07 11:35:15.267933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:33.570 [2024-10-07 11:35:15.267946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.570 [2024-10-07 11:35:15.267959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:33.570 [2024-10-07 11:35:15.267970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:24:33.570 [2024-10-07 11:35:15.267982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.570 [2024-10-07 11:35:15.268034] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:33.570 [2024-10-07 11:35:15.268052] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:36.854 [2024-10-07 11:35:18.475456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.854 [2024-10-07 11:35:18.475521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:36.854 [2024-10-07 11:35:18.475538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3212.628 ms 00:24:36.854 [2024-10-07 11:35:18.475552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.854 [2024-10-07 11:35:18.514950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.854 [2024-10-07 11:35:18.515010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.854 [2024-10-07 11:35:18.515027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.092 ms 00:24:36.854 [2024-10-07 11:35:18.515041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.854 [2024-10-07 11:35:18.515202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.854 [2024-10-07 11:35:18.515219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:36.854 [2024-10-07 11:35:18.515230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:36.854 [2024-10-07 11:35:18.515246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.572776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.572829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:37.112 [2024-10-07 11:35:18.572848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.573 ms 00:24:37.112 [2024-10-07 11:35:18.572863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.572916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.572929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:37.112 [2024-10-07 11:35:18.572941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:37.112 [2024-10-07 11:35:18.572966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.573479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.573496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:37.112 [2024-10-07 11:35:18.573507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:24:37.112 [2024-10-07 11:35:18.573523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 
[2024-10-07 11:35:18.573629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.573642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:37.112 [2024-10-07 11:35:18.573653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:37.112 [2024-10-07 11:35:18.573668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.594863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.594915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:37.112 [2024-10-07 11:35:18.594932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.205 ms 00:24:37.112 [2024-10-07 11:35:18.594945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.608254] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:37.112 [2024-10-07 11:35:18.611620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.611652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:37.112 [2024-10-07 11:35:18.611669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.576 ms 00:24:37.112 [2024-10-07 11:35:18.611683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.692191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.692257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:37.112 [2024-10-07 11:35:18.692282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.583 ms 00:24:37.112 [2024-10-07 11:35:18.692293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.692492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.692506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:37.112 [2024-10-07 11:35:18.692523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:24:37.112 [2024-10-07 11:35:18.692533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.729960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.730011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:37.112 [2024-10-07 11:35:18.730029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.427 ms 00:24:37.112 [2024-10-07 11:35:18.730041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.766463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.766619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:37.112 [2024-10-07 11:35:18.766647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.433 ms 00:24:37.112 [2024-10-07 11:35:18.766657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.112 [2024-10-07 11:35:18.767326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.112 [2024-10-07 11:35:18.767348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:37.112 
[2024-10-07 11:35:18.767363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:24:37.112 [2024-10-07 11:35:18.767373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.372 [2024-10-07 11:35:18.866397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.372 [2024-10-07 11:35:18.866455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:37.372 [2024-10-07 11:35:18.866479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.125 ms 00:24:37.372 [2024-10-07 11:35:18.866494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.372 [2024-10-07 11:35:18.906104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.372 [2024-10-07 11:35:18.906156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:37.372 [2024-10-07 11:35:18.906176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.582 ms 00:24:37.372 [2024-10-07 11:35:18.906187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.372 [2024-10-07 11:35:18.943529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.372 [2024-10-07 11:35:18.943579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:37.372 [2024-10-07 11:35:18.943597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.354 ms 00:24:37.372 [2024-10-07 11:35:18.943607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.372 [2024-10-07 11:35:18.981282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.372 [2024-10-07 11:35:18.981326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:37.372 [2024-10-07 11:35:18.981344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.686 ms 00:24:37.372 [2024-10-07 11:35:18.981354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.372 [2024-10-07 11:35:18.981401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.372 [2024-10-07 11:35:18.981414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:37.372 [2024-10-07 11:35:18.981430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:37.372 [2024-10-07 11:35:18.981445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.372 [2024-10-07 11:35:18.981564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.372 [2024-10-07 11:35:18.981577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:37.372 [2024-10-07 11:35:18.981592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:37.372 [2024-10-07 11:35:18.981602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.372 [2024-10-07 11:35:18.982708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3741.688 ms, result 0 00:24:37.372 { 00:24:37.372 "name": "ftl0", 00:24:37.372 "uuid": "b8292fed-22fb-4cec-9b97-a2c299f43dd8" 00:24:37.372 } 00:24:37.372 11:35:19 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:37.372 11:35:19 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:37.630 11:35:19 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:37.630 11:35:19 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:37.894 [2024-10-07 11:35:19.421279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.421338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:37.894 [2024-10-07 11:35:19.421354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:37.894 [2024-10-07 11:35:19.421367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.421395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:37.894 [2024-10-07 11:35:19.425791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.425823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:37.894 [2024-10-07 11:35:19.425856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:24:37.894 [2024-10-07 11:35:19.425867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.426119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.426133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:37.894 [2024-10-07 11:35:19.426146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:24:37.894 [2024-10-07 11:35:19.426157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.428685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.428707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:37.894 [2024-10-07 11:35:19.428723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:24:37.894 [2024-10-07 11:35:19.428737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.433773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.433915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:37.894 [2024-10-07 11:35:19.433942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.997 ms 00:24:37.894 [2024-10-07 11:35:19.433952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.470662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.470701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:37.894 [2024-10-07 11:35:19.470719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.690 ms 00:24:37.894 [2024-10-07 11:35:19.470729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.492298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.492337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:37.894 [2024-10-07 11:35:19.492355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.543 ms 00:24:37.894 [2024-10-07 11:35:19.492365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.492536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.492553] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:37.894 [2024-10-07 11:35:19.492567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:24:37.894 [2024-10-07 11:35:19.492577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.529508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.529547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:37.894 [2024-10-07 11:35:19.529565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.966 ms 00:24:37.894 [2024-10-07 11:35:19.529575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.566450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.566490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:37.894 [2024-10-07 11:35:19.566507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.887 ms 00:24:37.894 [2024-10-07 11:35:19.566517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.894 [2024-10-07 11:35:19.602858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.894 [2024-10-07 11:35:19.603036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:37.894 [2024-10-07 11:35:19.603064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.346 ms 00:24:37.894 [2024-10-07 11:35:19.603074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.154 [2024-10-07 11:35:19.639840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.154 [2024-10-07 11:35:19.639888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:38.154 [2024-10-07 11:35:19.639907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.674 ms 00:24:38.154 [2024-10-07 11:35:19.639917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.154 [2024-10-07 11:35:19.639964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:38.154 [2024-10-07 11:35:19.639984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640103] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 
[2024-10-07 11:35:19.640412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:38.154 [2024-10-07 11:35:19.640719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:38.154 [2024-10-07 11:35:19.640820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.640989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:38.155 [2024-10-07 11:35:19.641275] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:38.155 [2024-10-07 11:35:19.641287] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8292fed-22fb-4cec-9b97-a2c299f43dd8 00:24:38.155 [2024-10-07 11:35:19.641302] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:38.155 [2024-10-07 11:35:19.641317] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:38.155 [2024-10-07 11:35:19.641327] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:38.155 [2024-10-07 11:35:19.641340] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:38.155 [2024-10-07 11:35:19.641350] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:38.155 [2024-10-07 11:35:19.641362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:38.155 [2024-10-07 11:35:19.641376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:38.155 [2024-10-07 11:35:19.641387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:38.155 [2024-10-07 11:35:19.641396] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:38.155 [2024-10-07 11:35:19.641408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.155 [2024-10-07 11:35:19.641418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:38.155 [2024-10-07 11:35:19.641432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:24:38.155 [2024-10-07 11:35:19.641443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.155 [2024-10-07 11:35:19.661627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.155 [2024-10-07 11:35:19.661668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:38.155 [2024-10-07 11:35:19.661685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.156 ms 00:24:38.155 [2024-10-07 11:35:19.661696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.155 [2024-10-07 11:35:19.662213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.155 [2024-10-07 11:35:19.662231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:38.155 [2024-10-07 11:35:19.662246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:24:38.155 [2024-10-07 11:35:19.662256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.155 [2024-10-07 11:35:19.721234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.155 [2024-10-07 11:35:19.721283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:38.155 [2024-10-07 11:35:19.721301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.155 [2024-10-07 11:35:19.721312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.155 [2024-10-07 11:35:19.721394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.155 [2024-10-07 11:35:19.721405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:38.155 [2024-10-07 11:35:19.721418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.155 [2024-10-07 11:35:19.721428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.155 [2024-10-07 11:35:19.721547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.155 [2024-10-07 11:35:19.721561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:38.155 [2024-10-07 11:35:19.721574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.155 [2024-10-07 11:35:19.721584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.155 [2024-10-07 11:35:19.721613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.155 [2024-10-07 11:35:19.721624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:38.155 [2024-10-07 11:35:19.721636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.155 [2024-10-07 11:35:19.721647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.155 [2024-10-07 11:35:19.848460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.155 [2024-10-07 11:35:19.848520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:38.155 [2024-10-07 11:35:19.848539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:38.155 [2024-10-07 11:35:19.848550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:38.414 [2024-10-07 11:35:19.951637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:38.414 [2024-10-07 11:35:19.951859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:38.414 [2024-10-07 11:35:19.951890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:38.414 [2024-10-07 11:35:19.951901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:38.414 [2024-10-07 11:35:19.952029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:38.414 [2024-10-07 11:35:19.952043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:38.414 [2024-10-07 11:35:19.952056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:38.414 [2024-10-07 11:35:19.952067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:38.414 [2024-10-07 11:35:19.952131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:38.414 [2024-10-07 11:35:19.952147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:38.414 [2024-10-07 11:35:19.952160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:38.414 [2024-10-07 11:35:19.952170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:38.414 [2024-10-07 11:35:19.952292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:38.414 [2024-10-07 11:35:19.952306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:38.414 [2024-10-07 11:35:19.952320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:38.414 [2024-10-07 11:35:19.952330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:38.414 [2024-10-07 11:35:19.952373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:38.414 [2024-10-07 11:35:19.952385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:38.414 [2024-10-07 11:35:19.952401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:38.414 [2024-10-07 11:35:19.952412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:38.414 [2024-10-07 11:35:19.952456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:38.414 [2024-10-07 11:35:19.952467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:38.414 [2024-10-07 11:35:19.952480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:38.414 [2024-10-07 11:35:19.952490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:38.414 [2024-10-07 11:35:19.952540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:38.414 [2024-10-07 11:35:19.952555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:38.414 [2024-10-07 11:35:19.952568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:38.414 [2024-10-07 11:35:19.952578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:38.414 [2024-10-07 11:35:19.952709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.262 ms, result 0
00:24:38.414 true
00:24:38.414 11:35:19 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76984
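The xtrace entries that follow (the autotest_common.sh@950-@974 lines) show what killprocess 76984 actually does: confirm the pid argument is set and the process is alive, resolve its comm name, check it is not a bare sudo wrapper, then kill and reap it. A minimal sketch of that helper, reconstructed only from the traced commands below -- the sudo branch body and the return codes are assumptions, not the verbatim source:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                  # @950: require a pid argument
      kill -0 "$pid" || return 1                 # @954: signal 0 is an existence check only
      if [ "$(uname)" = Linux ]; then            # @955
          process_name=$(ps --no-headers -o comm= "$pid")   # @956: resolves to reactor_0 here
      fi
      if [ "$process_name" != sudo ]; then       # @960: guard against killing a sudo wrapper
          echo "killing process with pid $pid"   # @968
          kill "$pid"                            # @969
      fi
      wait "$pid"                                # @974: reap the child so its final output flushes
  }

The closing wait is what serializes the test: the dd at ftl/restore.sh@69 only starts at 11:35:25, once pid 76984 has fully exited.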
00:24:38.414 11:35:19 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76984 ']' 00:24:38.414 11:35:19 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76984 00:24:38.414 11:35:19 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:24:38.414 11:35:19 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.414 11:35:19 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76984 00:24:38.414 11:35:20 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.414 killing process with pid 76984 00:24:38.414 11:35:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.414 11:35:20 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76984' 00:24:38.414 11:35:20 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 76984 00:24:38.414 11:35:20 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 76984 00:24:44.411 11:35:25 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:48.600 262144+0 records in 00:24:48.600 262144+0 records out 00:24:48.600 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.22642 s, 254 MB/s 00:24:48.600 11:35:29 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:49.977 11:35:31 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:49.977 [2024-10-07 11:35:31.531579] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:49.977 [2024-10-07 11:35:31.531712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77231 ] 00:24:50.236 [2024-10-07 11:35:31.711454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.236 [2024-10-07 11:35:31.913820] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.805 [2024-10-07 11:35:32.300644] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.805 [2024-10-07 11:35:32.300708] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.805 [2024-10-07 11:35:32.470529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.805 [2024-10-07 11:35:32.470735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:50.805 [2024-10-07 11:35:32.470771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:50.805 [2024-10-07 11:35:32.470782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.805 [2024-10-07 11:35:32.470857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.805 [2024-10-07 11:35:32.470870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.805 [2024-10-07 11:35:32.470881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:50.805 [2024-10-07 11:35:32.470891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.470913] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:24:50.806 [2024-10-07 11:35:32.471905] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:50.806 [2024-10-07 11:35:32.471928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.471939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.806 [2024-10-07 11:35:32.471950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:24:50.806 [2024-10-07 11:35:32.471961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.473412] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:50.806 [2024-10-07 11:35:32.492477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.492524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:50.806 [2024-10-07 11:35:32.492538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.097 ms 00:24:50.806 [2024-10-07 11:35:32.492549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.492615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.492628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:50.806 [2024-10-07 11:35:32.492639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:50.806 [2024-10-07 11:35:32.492650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.499365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.499393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.806 [2024-10-07 11:35:32.499405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.652 ms 00:24:50.806 [2024-10-07 11:35:32.499415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.499518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.499532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.806 [2024-10-07 11:35:32.499543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:50.806 [2024-10-07 11:35:32.499553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.499599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.499612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:50.806 [2024-10-07 11:35:32.499622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:50.806 [2024-10-07 11:35:32.499632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.499657] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:50.806 [2024-10-07 11:35:32.504513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.504544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.806 [2024-10-07 11:35:32.504556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:24:50.806 [2024-10-07 11:35:32.504566] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.504596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.504607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:50.806 [2024-10-07 11:35:32.504618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:50.806 [2024-10-07 11:35:32.504628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.504686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:50.806 [2024-10-07 11:35:32.504712] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:50.806 [2024-10-07 11:35:32.504762] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:50.806 [2024-10-07 11:35:32.504782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:50.806 [2024-10-07 11:35:32.504870] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:50.806 [2024-10-07 11:35:32.504883] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:50.806 [2024-10-07 11:35:32.504897] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:50.806 [2024-10-07 11:35:32.504917] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:50.806 [2024-10-07 11:35:32.504928] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:50.806 [2024-10-07 11:35:32.504939] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:50.806 [2024-10-07 11:35:32.504949] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:50.806 [2024-10-07 11:35:32.504959] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:50.806 [2024-10-07 11:35:32.504969] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:50.806 [2024-10-07 11:35:32.504980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.504990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:50.806 [2024-10-07 11:35:32.505000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:24:50.806 [2024-10-07 11:35:32.505010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.505081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.806 [2024-10-07 11:35:32.505098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:50.806 [2024-10-07 11:35:32.505109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:50.806 [2024-10-07 11:35:32.505118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.806 [2024-10-07 11:35:32.505216] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:50.806 [2024-10-07 11:35:32.505232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:50.806 [2024-10-07 11:35:32.505243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:24:50.806 [2024-10-07 11:35:32.505253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.806 [2024-10-07 11:35:32.505263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:50.806 [2024-10-07 11:35:32.505272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:50.806 [2024-10-07 11:35:32.505282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:50.806 [2024-10-07 11:35:32.505293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:50.806 [2024-10-07 11:35:32.505303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:50.806 [2024-10-07 11:35:32.505312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.806 [2024-10-07 11:35:32.505323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:50.806 [2024-10-07 11:35:32.505333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:50.806 [2024-10-07 11:35:32.505342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.806 [2024-10-07 11:35:32.505363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:50.806 [2024-10-07 11:35:32.505373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:50.806 [2024-10-07 11:35:32.505382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.806 [2024-10-07 11:35:32.505392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:50.806 [2024-10-07 11:35:32.505402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:50.806 [2024-10-07 11:35:32.505411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:50.807 [2024-10-07 11:35:32.505430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.807 [2024-10-07 11:35:32.505448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:50.807 [2024-10-07 11:35:32.505458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.807 [2024-10-07 11:35:32.505476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:50.807 [2024-10-07 11:35:32.505486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.807 [2024-10-07 11:35:32.505504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:50.807 [2024-10-07 11:35:32.505513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.807 [2024-10-07 11:35:32.505531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:50.807 [2024-10-07 11:35:32.505540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.807 [2024-10-07 11:35:32.505558] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:24:50.807 [2024-10-07 11:35:32.505567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:50.807 [2024-10-07 11:35:32.505576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.807 [2024-10-07 11:35:32.505585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:50.807 [2024-10-07 11:35:32.505594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:50.807 [2024-10-07 11:35:32.505603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:50.807 [2024-10-07 11:35:32.505621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:50.807 [2024-10-07 11:35:32.505631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505640] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:50.807 [2024-10-07 11:35:32.505650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:50.807 [2024-10-07 11:35:32.505666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.807 [2024-10-07 11:35:32.505675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.807 [2024-10-07 11:35:32.505685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:50.807 [2024-10-07 11:35:32.505695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:50.807 [2024-10-07 11:35:32.505704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:50.807 [2024-10-07 11:35:32.505714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:50.807 [2024-10-07 11:35:32.505723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:50.807 [2024-10-07 11:35:32.505732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:50.807 [2024-10-07 11:35:32.505753] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:50.807 [2024-10-07 11:35:32.505766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.807 [2024-10-07 11:35:32.505778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:50.807 [2024-10-07 11:35:32.505789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:50.807 [2024-10-07 11:35:32.505799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:50.807 [2024-10-07 11:35:32.505809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:50.807 [2024-10-07 11:35:32.505820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:50.807 [2024-10-07 11:35:32.505830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:50.807 [2024-10-07 11:35:32.505841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:50.807 [2024-10-07 11:35:32.505852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:50.807 [2024-10-07 11:35:32.505871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:50.807 [2024-10-07 11:35:32.505882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:50.807 [2024-10-07 11:35:32.505892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:50.807 [2024-10-07 11:35:32.505902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:50.807 [2024-10-07 11:35:32.505912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:50.807 [2024-10-07 11:35:32.505922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:50.807 [2024-10-07 11:35:32.505933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:50.807 [2024-10-07 11:35:32.505944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.807 [2024-10-07 11:35:32.505956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:50.807 [2024-10-07 11:35:32.505966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:50.807 [2024-10-07 11:35:32.505976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:50.807 [2024-10-07 11:35:32.505987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:50.807 [2024-10-07 11:35:32.505997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.807 [2024-10-07 11:35:32.506008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:50.807 [2024-10-07 11:35:32.506018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:24:50.807 [2024-10-07 11:35:32.506028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.551939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.552124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.067 [2024-10-07 11:35:32.552149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.931 ms 00:24:51.067 [2024-10-07 11:35:32.552161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.552258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.552270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.067 [2024-10-07 11:35:32.552281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.054 ms 00:24:51.067 [2024-10-07 11:35:32.552291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.598970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.599144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.067 [2024-10-07 11:35:32.599196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.691 ms 00:24:51.067 [2024-10-07 11:35:32.599208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.599256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.599268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.067 [2024-10-07 11:35:32.599280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.067 [2024-10-07 11:35:32.599290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.599823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.599838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.067 [2024-10-07 11:35:32.599849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:24:51.067 [2024-10-07 11:35:32.599869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.600010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.600025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.067 [2024-10-07 11:35:32.600036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:24:51.067 [2024-10-07 11:35:32.600047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.620238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.620280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.067 [2024-10-07 11:35:32.620296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.202 ms 00:24:51.067 [2024-10-07 11:35:32.620307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.640083] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:51.067 [2024-10-07 11:35:32.640127] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:51.067 [2024-10-07 11:35:32.640142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.640153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:51.067 [2024-10-07 11:35:32.640165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.742 ms 00:24:51.067 [2024-10-07 11:35:32.640175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.670263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.670441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:51.067 [2024-10-07 11:35:32.670483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.092 ms 00:24:51.067 [2024-10-07 11:35:32.670494] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.688853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.688998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:51.067 [2024-10-07 11:35:32.689018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.288 ms 00:24:51.067 [2024-10-07 11:35:32.689030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.707376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.707413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:51.067 [2024-10-07 11:35:32.707426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.316 ms 00:24:51.067 [2024-10-07 11:35:32.707436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.067 [2024-10-07 11:35:32.708205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.067 [2024-10-07 11:35:32.708236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.067 [2024-10-07 11:35:32.708248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:24:51.068 [2024-10-07 11:35:32.708258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.794110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.326 [2024-10-07 11:35:32.794354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:51.326 [2024-10-07 11:35:32.794382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.968 ms 00:24:51.326 [2024-10-07 11:35:32.794394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.805210] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:51.326 [2024-10-07 11:35:32.808198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.326 [2024-10-07 11:35:32.808229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.326 [2024-10-07 11:35:32.808243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.732 ms 00:24:51.326 [2024-10-07 11:35:32.808253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.808355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.326 [2024-10-07 11:35:32.808368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:51.326 [2024-10-07 11:35:32.808379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:51.326 [2024-10-07 11:35:32.808389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.808486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.326 [2024-10-07 11:35:32.808499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.326 [2024-10-07 11:35:32.808510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:51.326 [2024-10-07 11:35:32.808520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.808546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.326 [2024-10-07 11:35:32.808564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:24:51.326 [2024-10-07 11:35:32.808574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:51.326 [2024-10-07 11:35:32.808584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.808621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:51.326 [2024-10-07 11:35:32.808634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.326 [2024-10-07 11:35:32.808644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:51.326 [2024-10-07 11:35:32.808654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:51.326 [2024-10-07 11:35:32.808664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.845206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.326 [2024-10-07 11:35:32.845340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.326 [2024-10-07 11:35:32.845415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.572 ms 00:24:51.326 [2024-10-07 11:35:32.845452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.845603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.326 [2024-10-07 11:35:32.845647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.326 [2024-10-07 11:35:32.845678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:51.326 [2024-10-07 11:35:32.845775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.326 [2024-10-07 11:35:32.846991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.533 ms, result 0 00:24:52.316  [2024-10-07T11:35:34.961Z] Copying: 27/1024 [MB] (27 MBps) [2024-10-07T11:35:35.895Z] Copying: 54/1024 [MB] (26 MBps) [2024-10-07T11:35:37.271Z] Copying: 81/1024 [MB] (27 MBps) [2024-10-07T11:35:38.207Z] Copying: 109/1024 [MB] (27 MBps) [2024-10-07T11:35:39.142Z] Copying: 136/1024 [MB] (26 MBps) [2024-10-07T11:35:40.079Z] Copying: 162/1024 [MB] (26 MBps) [2024-10-07T11:35:41.031Z] Copying: 188/1024 [MB] (26 MBps) [2024-10-07T11:35:41.966Z] Copying: 216/1024 [MB] (27 MBps) [2024-10-07T11:35:42.904Z] Copying: 244/1024 [MB] (27 MBps) [2024-10-07T11:35:43.842Z] Copying: 271/1024 [MB] (27 MBps) [2024-10-07T11:35:45.230Z] Copying: 298/1024 [MB] (27 MBps) [2024-10-07T11:35:46.167Z] Copying: 325/1024 [MB] (26 MBps) [2024-10-07T11:35:47.104Z] Copying: 352/1024 [MB] (27 MBps) [2024-10-07T11:35:48.063Z] Copying: 380/1024 [MB] (27 MBps) [2024-10-07T11:35:49.000Z] Copying: 408/1024 [MB] (27 MBps) [2024-10-07T11:35:49.935Z] Copying: 437/1024 [MB] (28 MBps) [2024-10-07T11:35:50.871Z] Copying: 464/1024 [MB] (27 MBps) [2024-10-07T11:35:52.247Z] Copying: 492/1024 [MB] (28 MBps) [2024-10-07T11:35:53.180Z] Copying: 520/1024 [MB] (27 MBps) [2024-10-07T11:35:54.112Z] Copying: 546/1024 [MB] (26 MBps) [2024-10-07T11:35:55.044Z] Copying: 573/1024 [MB] (26 MBps) [2024-10-07T11:35:55.977Z] Copying: 599/1024 [MB] (26 MBps) [2024-10-07T11:35:56.913Z] Copying: 626/1024 [MB] (26 MBps) [2024-10-07T11:35:57.847Z] Copying: 652/1024 [MB] (26 MBps) [2024-10-07T11:35:59.222Z] Copying: 680/1024 [MB] (27 MBps) [2024-10-07T11:36:00.157Z] Copying: 707/1024 [MB] (27 MBps) [2024-10-07T11:36:01.090Z] Copying: 734/1024 [MB] (26 
MBps) [2024-10-07T11:36:02.021Z] Copying: 763/1024 [MB] (28 MBps) [2024-10-07T11:36:02.955Z] Copying: 790/1024 [MB] (27 MBps) [2024-10-07T11:36:03.890Z] Copying: 817/1024 [MB] (26 MBps) [2024-10-07T11:36:04.877Z] Copying: 845/1024 [MB] (28 MBps) [2024-10-07T11:36:05.812Z] Copying: 875/1024 [MB] (30 MBps) [2024-10-07T11:36:07.188Z] Copying: 903/1024 [MB] (28 MBps) [2024-10-07T11:36:08.123Z] Copying: 930/1024 [MB] (26 MBps) [2024-10-07T11:36:09.062Z] Copying: 957/1024 [MB] (27 MBps) [2024-10-07T11:36:09.998Z] Copying: 984/1024 [MB] (26 MBps) [2024-10-07T11:36:10.257Z] Copying: 1012/1024 [MB] (27 MBps) [2024-10-07T11:36:10.257Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-07 11:36:10.226212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.546 [2024-10-07 11:36:10.226271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:28.546 [2024-10-07 11:36:10.226301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:28.546 [2024-10-07 11:36:10.226317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.546 [2024-10-07 11:36:10.226362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:28.546 [2024-10-07 11:36:10.230498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.546 [2024-10-07 11:36:10.230538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:28.546 [2024-10-07 11:36:10.230551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.120 ms 00:25:28.546 [2024-10-07 11:36:10.230561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.546 [2024-10-07 11:36:10.232173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.546 [2024-10-07 11:36:10.232209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:28.546 [2024-10-07 11:36:10.232222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:25:28.546 [2024-10-07 11:36:10.232233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.546 [2024-10-07 11:36:10.249585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.546 [2024-10-07 11:36:10.249631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:28.546 [2024-10-07 11:36:10.249644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.363 ms 00:25:28.546 [2024-10-07 11:36:10.249654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.546 [2024-10-07 11:36:10.254756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.546 [2024-10-07 11:36:10.254790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:28.546 [2024-10-07 11:36:10.254802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.077 ms 00:25:28.546 [2024-10-07 11:36:10.254812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.807 [2024-10-07 11:36:10.292134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.807 [2024-10-07 11:36:10.292173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:28.807 [2024-10-07 11:36:10.292187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.299 ms 00:25:28.807 [2024-10-07 11:36:10.292197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.807 [2024-10-07 11:36:10.312918] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.807 [2024-10-07 11:36:10.312957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:28.807 [2024-10-07 11:36:10.312976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.717 ms 00:25:28.807 [2024-10-07 11:36:10.312987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.807 [2024-10-07 11:36:10.313123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.807 [2024-10-07 11:36:10.313139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:28.807 [2024-10-07 11:36:10.313151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:28.807 [2024-10-07 11:36:10.313161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.807 [2024-10-07 11:36:10.349248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.807 [2024-10-07 11:36:10.349394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:28.807 [2024-10-07 11:36:10.349416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.128 ms 00:25:28.807 [2024-10-07 11:36:10.349426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.807 [2024-10-07 11:36:10.385673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.808 [2024-10-07 11:36:10.385709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:28.808 [2024-10-07 11:36:10.385722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.203 ms 00:25:28.808 [2024-10-07 11:36:10.385733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.808 [2024-10-07 11:36:10.421971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.808 [2024-10-07 11:36:10.422009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:28.808 [2024-10-07 11:36:10.422023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.244 ms 00:25:28.808 [2024-10-07 11:36:10.422033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.808 [2024-10-07 11:36:10.457693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.808 [2024-10-07 11:36:10.457730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:28.808 [2024-10-07 11:36:10.457757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.645 ms 00:25:28.808 [2024-10-07 11:36:10.457768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.808 [2024-10-07 11:36:10.457803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:28.808 [2024-10-07 11:36:10.457820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457876] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.457993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 
11:36:10.458143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:25:28.808 [2024-10-07 11:36:10.458420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:28.808 [2024-10-07 11:36:10.458631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:28.809 [2024-10-07 11:36:10.458945] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:28.809 [2024-10-07 11:36:10.458961] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8292fed-22fb-4cec-9b97-a2c299f43dd8 00:25:28.809 [2024-10-07 11:36:10.458972] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:28.809 [2024-10-07 11:36:10.458982] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:28.809 [2024-10-07 11:36:10.458992] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:28.809 [2024-10-07 11:36:10.459003] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:28.809 [2024-10-07 11:36:10.459012] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:28.809 [2024-10-07 11:36:10.459035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:28.809 [2024-10-07 11:36:10.459045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:28.809 [2024-10-07 11:36:10.459053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:28.809 [2024-10-07 11:36:10.459062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:28.809 [2024-10-07 11:36:10.459072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.809 [2024-10-07 11:36:10.459088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:28.809 [2024-10-07 11:36:10.459108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:25:28.809 [2024-10-07 11:36:10.459118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.809 [2024-10-07 11:36:10.478537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.809 [2024-10-07 11:36:10.478572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:28.809 [2024-10-07 11:36:10.478585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.417 ms 00:25:28.809 [2024-10-07 11:36:10.478595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.809 [2024-10-07 11:36:10.479148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.809 [2024-10-07 11:36:10.479159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:28.809 [2024-10-07 11:36:10.479170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:25:28.809 [2024-10-07 11:36:10.479179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.069 [2024-10-07 11:36:10.524801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.069 [2024-10-07 11:36:10.524840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.069 [2024-10-07 11:36:10.524854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.069 [2024-10-07 11:36:10.524864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.069 [2024-10-07 11:36:10.524927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.069 [2024-10-07 11:36:10.524938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.069 [2024-10-07 11:36:10.524948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.069 [2024-10-07 11:36:10.524958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.069 [2024-10-07 11:36:10.525023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.069 [2024-10-07 11:36:10.525036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.069 [2024-10-07 11:36:10.525047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.069 [2024-10-07 11:36:10.525056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.069 [2024-10-07 11:36:10.525073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.069 [2024-10-07 11:36:10.525088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:25:29.069 [2024-10-07 11:36:10.525098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.069 [2024-10-07 11:36:10.525108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.069 [2024-10-07 11:36:10.649660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.069 [2024-10-07 11:36:10.649907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.069 [2024-10-07 11:36:10.649932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.069 [2024-10-07 11:36:10.649943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.069 [2024-10-07 11:36:10.751510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.069 [2024-10-07 11:36:10.751570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.069 [2024-10-07 11:36:10.751586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.069 [2024-10-07 11:36:10.751596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.069 [2024-10-07 11:36:10.751690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.069 [2024-10-07 11:36:10.751702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.069 [2024-10-07 11:36:10.751713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.069 [2024-10-07 11:36:10.751724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.069 [2024-10-07 11:36:10.751798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.069 [2024-10-07 11:36:10.751811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.070 [2024-10-07 11:36:10.751825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.070 [2024-10-07 11:36:10.751835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.070 [2024-10-07 11:36:10.751954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.070 [2024-10-07 11:36:10.751968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.070 [2024-10-07 11:36:10.751979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.070 [2024-10-07 11:36:10.751990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.070 [2024-10-07 11:36:10.752025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.070 [2024-10-07 11:36:10.752037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:29.070 [2024-10-07 11:36:10.752052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.070 [2024-10-07 11:36:10.752062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.070 [2024-10-07 11:36:10.752099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.070 [2024-10-07 11:36:10.752110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.070 [2024-10-07 11:36:10.752120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.070 [2024-10-07 11:36:10.752130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.070 [2024-10-07 11:36:10.752170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.070 [2024-10-07 11:36:10.752181] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.070 [2024-10-07 11:36:10.752195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.070 [2024-10-07 11:36:10.752204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.070 [2024-10-07 11:36:10.752345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.946 ms, result 0 00:25:30.514 00:25:30.514 00:25:30.514 11:36:12 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:30.514 [2024-10-07 11:36:12.131173] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:25:30.514 [2024-10-07 11:36:12.131305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77642 ] 00:25:30.773 [2024-10-07 11:36:12.303199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.033 [2024-10-07 11:36:12.521288] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.292 [2024-10-07 11:36:12.877540] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.292 [2024-10-07 11:36:12.877610] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.552 [2024-10-07 11:36:13.038391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.038449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:31.552 [2024-10-07 11:36:13.038465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:31.552 [2024-10-07 11:36:13.038476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.038532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.038545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.552 [2024-10-07 11:36:13.038555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:31.552 [2024-10-07 11:36:13.038565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.038587] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:31.552 [2024-10-07 11:36:13.039502] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:31.552 [2024-10-07 11:36:13.039529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.039540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.552 [2024-10-07 11:36:13.039552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:25:31.552 [2024-10-07 11:36:13.039561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.041022] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:31.552 [2024-10-07 11:36:13.059385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 
11:36:13.059539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:31.552 [2024-10-07 11:36:13.059561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.393 ms 00:25:31.552 [2024-10-07 11:36:13.059571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.059640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.059656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:31.552 [2024-10-07 11:36:13.059668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:31.552 [2024-10-07 11:36:13.059678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.066303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.066442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.552 [2024-10-07 11:36:13.066461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.541 ms 00:25:31.552 [2024-10-07 11:36:13.066472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.066554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.066567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.552 [2024-10-07 11:36:13.066579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:31.552 [2024-10-07 11:36:13.066588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.066633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.066645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:31.552 [2024-10-07 11:36:13.066656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:31.552 [2024-10-07 11:36:13.066666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.066690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:31.552 [2024-10-07 11:36:13.071587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.071617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.552 [2024-10-07 11:36:13.071629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.911 ms 00:25:31.552 [2024-10-07 11:36:13.071639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.071670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.071681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:31.552 [2024-10-07 11:36:13.071692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:31.552 [2024-10-07 11:36:13.071702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.071771] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:31.552 [2024-10-07 11:36:13.071795] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:31.552 [2024-10-07 11:36:13.071829] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:31.552 [2024-10-07 11:36:13.071847] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:31.552 [2024-10-07 11:36:13.071936] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:31.552 [2024-10-07 11:36:13.071950] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:31.552 [2024-10-07 11:36:13.071963] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:31.552 [2024-10-07 11:36:13.071980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:31.552 [2024-10-07 11:36:13.071992] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:31.552 [2024-10-07 11:36:13.072004] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:31.552 [2024-10-07 11:36:13.072014] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:31.552 [2024-10-07 11:36:13.072026] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:31.552 [2024-10-07 11:36:13.072036] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:31.552 [2024-10-07 11:36:13.072046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.072056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:31.552 [2024-10-07 11:36:13.072066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:31.552 [2024-10-07 11:36:13.072076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.072150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.552 [2024-10-07 11:36:13.072164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:31.552 [2024-10-07 11:36:13.072174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:31.552 [2024-10-07 11:36:13.072184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.552 [2024-10-07 11:36:13.072279] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:31.553 [2024-10-07 11:36:13.072294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:31.553 [2024-10-07 11:36:13.072304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:31.553 [2024-10-07 11:36:13.072334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:31.553 [2024-10-07 11:36:13.072363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.553 [2024-10-07 11:36:13.072382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:25:31.553 [2024-10-07 11:36:13.072392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:31.553 [2024-10-07 11:36:13.072401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.553 [2024-10-07 11:36:13.072420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:31.553 [2024-10-07 11:36:13.072429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:31.553 [2024-10-07 11:36:13.072438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:31.553 [2024-10-07 11:36:13.072458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:31.553 [2024-10-07 11:36:13.072485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:31.553 [2024-10-07 11:36:13.072513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:31.553 [2024-10-07 11:36:13.072539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:31.553 [2024-10-07 11:36:13.072565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:31.553 [2024-10-07 11:36:13.072592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.553 [2024-10-07 11:36:13.072610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:31.553 [2024-10-07 11:36:13.072619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:31.553 [2024-10-07 11:36:13.072627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.553 [2024-10-07 11:36:13.072636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:31.553 [2024-10-07 11:36:13.072646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:31.553 [2024-10-07 11:36:13.072654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:31.553 [2024-10-07 11:36:13.072672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:31.553 [2024-10-07 11:36:13.072683] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072692] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:31.553 [2024-10-07 11:36:13.072702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:31.553 [2024-10-07 11:36:13.072715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.553 [2024-10-07 11:36:13.072735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:31.553 [2024-10-07 11:36:13.072758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:31.553 [2024-10-07 11:36:13.072767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:31.553 [2024-10-07 11:36:13.072776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:31.553 [2024-10-07 11:36:13.072785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:31.553 [2024-10-07 11:36:13.072794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:31.553 [2024-10-07 11:36:13.072805] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:31.553 [2024-10-07 11:36:13.072817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.553 [2024-10-07 11:36:13.072829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:31.553 [2024-10-07 11:36:13.072839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:31.553 [2024-10-07 11:36:13.072849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:31.553 [2024-10-07 11:36:13.072860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:31.553 [2024-10-07 11:36:13.072870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:31.553 [2024-10-07 11:36:13.072880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:31.553 [2024-10-07 11:36:13.072891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:31.553 [2024-10-07 11:36:13.072901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:31.553 [2024-10-07 11:36:13.072911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:31.553 [2024-10-07 11:36:13.072921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:31.553 [2024-10-07 11:36:13.072931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:31.553 [2024-10-07 11:36:13.072941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:31.553 [2024-10-07 11:36:13.072951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:31.553 [2024-10-07 11:36:13.072961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:31.553 [2024-10-07 11:36:13.072972] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:31.553 [2024-10-07 11:36:13.072983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.553 [2024-10-07 11:36:13.072994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:31.553 [2024-10-07 11:36:13.073004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:31.553 [2024-10-07 11:36:13.073014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:31.553 [2024-10-07 11:36:13.073026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:31.553 [2024-10-07 11:36:13.073037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.553 [2024-10-07 11:36:13.073047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:31.553 [2024-10-07 11:36:13.073057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:25:31.553 [2024-10-07 11:36:13.073067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.553 [2024-10-07 11:36:13.116122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.553 [2024-10-07 11:36:13.116168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.553 [2024-10-07 11:36:13.116183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.074 ms 00:25:31.553 [2024-10-07 11:36:13.116193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.553 [2024-10-07 11:36:13.116303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.553 [2024-10-07 11:36:13.116319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:31.553 [2024-10-07 11:36:13.116330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:31.553 [2024-10-07 11:36:13.116340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.553 [2024-10-07 11:36:13.162234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.553 [2024-10-07 11:36:13.162469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.553 [2024-10-07 11:36:13.162500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.899 ms 00:25:31.553 [2024-10-07 11:36:13.162511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.553 [2024-10-07 11:36:13.162564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.553 [2024-10-07 11:36:13.162574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.553 [2024-10-07 11:36:13.162585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.003 ms 00:25:31.553 [2024-10-07 11:36:13.162595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.553 [2024-10-07 11:36:13.163106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.553 [2024-10-07 11:36:13.163122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.553 [2024-10-07 11:36:13.163133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:25:31.553 [2024-10-07 11:36:13.163149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.553 [2024-10-07 11:36:13.163271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.553 [2024-10-07 11:36:13.163284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.553 [2024-10-07 11:36:13.163295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:31.553 [2024-10-07 11:36:13.163305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.553 [2024-10-07 11:36:13.180707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.554 [2024-10-07 11:36:13.180769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.554 [2024-10-07 11:36:13.180786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.408 ms 00:25:31.554 [2024-10-07 11:36:13.180798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.554 [2024-10-07 11:36:13.199734] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:31.554 [2024-10-07 11:36:13.199960] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:31.554 [2024-10-07 11:36:13.199982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.554 [2024-10-07 11:36:13.199994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:31.554 [2024-10-07 11:36:13.200008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.055 ms 00:25:31.554 [2024-10-07 11:36:13.200018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.554 [2024-10-07 11:36:13.230225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.554 [2024-10-07 11:36:13.230303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:31.554 [2024-10-07 11:36:13.230320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.201 ms 00:25:31.554 [2024-10-07 11:36:13.230332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.554 [2024-10-07 11:36:13.248789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.554 [2024-10-07 11:36:13.248829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:31.554 [2024-10-07 11:36:13.248844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.433 ms 00:25:31.554 [2024-10-07 11:36:13.248854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.813 [2024-10-07 11:36:13.267288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.813 [2024-10-07 11:36:13.267324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:31.813 [2024-10-07 11:36:13.267338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.408 ms 00:25:31.813 [2024-10-07 
11:36:13.267348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.813 [2024-10-07 11:36:13.268148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.814 [2024-10-07 11:36:13.268177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:31.814 [2024-10-07 11:36:13.268190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:25:31.814 [2024-10-07 11:36:13.268200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.814 [2024-10-07 11:36:13.351844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.814 [2024-10-07 11:36:13.351912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:31.814 [2024-10-07 11:36:13.351929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.754 ms 00:25:31.814 [2024-10-07 11:36:13.351940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.814 [2024-10-07 11:36:13.362856] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:31.814 [2024-10-07 11:36:13.365718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.814 [2024-10-07 11:36:13.365862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:31.814 [2024-10-07 11:36:13.365892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.746 ms 00:25:31.814 [2024-10-07 11:36:13.365904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.814 [2024-10-07 11:36:13.366009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.814 [2024-10-07 11:36:13.366023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:31.814 [2024-10-07 11:36:13.366034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:31.814 [2024-10-07 11:36:13.366044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.814 [2024-10-07 11:36:13.366137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.814 [2024-10-07 11:36:13.366153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:31.814 [2024-10-07 11:36:13.366164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:31.814 [2024-10-07 11:36:13.366179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.814 [2024-10-07 11:36:13.366202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.814 [2024-10-07 11:36:13.366213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:31.814 [2024-10-07 11:36:13.366224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:31.814 [2024-10-07 11:36:13.366234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.814 [2024-10-07 11:36:13.366268] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:31.814 [2024-10-07 11:36:13.366281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.814 [2024-10-07 11:36:13.366299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:31.814 [2024-10-07 11:36:13.366314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:31.814 [2024-10-07 11:36:13.366324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.814 [2024-10-07 11:36:13.402460] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:31.814 [2024-10-07 11:36:13.402510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:31.814 [2024-10-07 11:36:13.402527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.171 ms
00:25:31.814 [2024-10-07 11:36:13.402538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:31.814 [2024-10-07 11:36:13.402626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:31.814 [2024-10-07 11:36:13.402639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:31.814 [2024-10-07 11:36:13.402651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:25:31.814 [2024-10-07 11:36:13.402665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:31.814 [2024-10-07 11:36:13.403927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 365.645 ms, result 0
00:25:33.194 [2024-10-07T11:36:15.908Z] Copying: 27/1024 [MB] (27 MBps)
[2024-10-07T11:36:48.603Z] Copying: 1024/1024 [MB] (average 29 MBps)
[2024-10-07 11:36:48.511052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:06.892 [2024-10-07 11:36:48.511125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:06.892 [2024-10-07 11:36:48.511143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:26:06.892 [2024-10-07 11:36:48.511161] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.892 [2024-10-07 11:36:48.511187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:06.892 [2024-10-07 11:36:48.516243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.892 [2024-10-07 11:36:48.516284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:06.892 [2024-10-07 11:36:48.516299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.043 ms 00:26:06.892 [2024-10-07 11:36:48.516309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.892 [2024-10-07 11:36:48.516524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.892 [2024-10-07 11:36:48.516542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:06.892 [2024-10-07 11:36:48.516553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:26:06.892 [2024-10-07 11:36:48.516568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.892 [2024-10-07 11:36:48.519834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.892 [2024-10-07 11:36:48.519867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:06.892 [2024-10-07 11:36:48.519880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:26:06.892 [2024-10-07 11:36:48.519892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.892 [2024-10-07 11:36:48.525549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.892 [2024-10-07 11:36:48.525586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:06.892 [2024-10-07 11:36:48.525598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.645 ms 00:26:06.892 [2024-10-07 11:36:48.525608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.892 [2024-10-07 11:36:48.567863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.892 [2024-10-07 11:36:48.567917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:06.892 [2024-10-07 11:36:48.567933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.240 ms 00:26:06.892 [2024-10-07 11:36:48.567945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.892 [2024-10-07 11:36:48.590735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.892 [2024-10-07 11:36:48.590796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:06.892 [2024-10-07 11:36:48.590813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.774 ms 00:26:06.892 [2024-10-07 11:36:48.590824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.892 [2024-10-07 11:36:48.590975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.892 [2024-10-07 11:36:48.590990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:06.892 [2024-10-07 11:36:48.591003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:06.892 [2024-10-07 11:36:48.591014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.153 [2024-10-07 11:36:48.630888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.153 [2024-10-07 11:36:48.630933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist band info metadata
00:26:07.153 [2024-10-07 11:36:48.630948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.920 ms
00:26:07.153 [2024-10-07 11:36:48.630959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:07.153 [2024-10-07 11:36:48.670605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:07.153 [2024-10-07 11:36:48.670653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:26:07.153 [2024-10-07 11:36:48.670669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.665 ms
00:26:07.153 [2024-10-07 11:36:48.670681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:07.153 [2024-10-07 11:36:48.710048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:07.153 [2024-10-07 11:36:48.710089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:07.153 [2024-10-07 11:36:48.710104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.388 ms
00:26:07.153 [2024-10-07 11:36:48.710115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:07.153 [2024-10-07 11:36:48.749513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:07.153 [2024-10-07 11:36:48.749559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:07.153 [2024-10-07 11:36:48.749575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.374 ms
00:26:07.153 [2024-10-07 11:36:48.749586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:07.153 [2024-10-07 11:36:48.749629] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:07.153 [2024-10-07 11:36:48.749647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free
00:26:07.154 [2024-10-07 11:36:48.750889] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:07.154 [2024-10-07 11:36:48.750899] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8292fed-22fb-4cec-9b97-a2c299f43dd8
00:26:07.154 [2024-10-07 11:36:48.750911] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:26:07.154 [2024-10-07 11:36:48.750922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:26:07.154 [2024-10-07 11:36:48.750932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:26:07.154 [2024-10-07 11:36:48.750943] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:26:07.154 [2024-10-07 11:36:48.750953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:07.154 [2024-10-07 11:36:48.750970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:26:07.154 [2024-10-07 11:36:48.750982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:26:07.154 [2024-10-07 11:36:48.750992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:26:07.154 [2024-10-07 11:36:48.751001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:26:07.154 [2024-10-07 11:36:48.751011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:07.154 [2024-10-07 11:36:48.751034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:26:07.154 [2024-10-07 11:36:48.751046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.386 ms
00:26:07.154 [2024-10-07 11:36:48.751057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
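The statistics dump just above (and the identical one at the first shutdown) reports WAF: inf: the device absorbed 960 total media writes, all of them FTL metadata, while the user-write counter stayed at 0. Write amplification is the ratio of total media writes to user writes, so with zero user writes the ratio is undefined and is printed as infinity. A minimal C sketch of that calculation, using hypothetical struct and field names rather than SPDK's actual ftl_debug.c internals:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical counters mirroring the fields shown in the dump above. */
    struct ftl_stats {
        uint64_t total_writes; /* all media writes: user data plus FTL metadata */
        uint64_t user_writes;  /* writes requested by the application */
    };

    /* Write amplification factor: total media writes per user write.
     * With zero user writes the ratio is undefined and reported as "inf",
     * matching the "WAF: inf" record in the dump above. */
    static double waf(const struct ftl_stats *s)
    {
        return s->user_writes
                   ? (double)s->total_writes / (double)s->user_writes
                   : INFINITY;
    }

    int main(void)
    {
        struct ftl_stats s = { .total_writes = 960, .user_writes = 0 };
        printf("WAF: %g\n", waf(&s)); /* prints "WAF: inf" */
        return 0;
    }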
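Every management step in this trace appears as the same four-record group: the direction (Action on the forward path, Rollback during teardown), the step name, the measured duration, and a status code. A rough illustration of that logging pattern, again with hypothetical names rather than the real trace_step() in mngt/ftl_mngt.c:

    #include <stdio.h>
    #include <time.h>

    /* Emit the four-record group seen in the log: Action/Rollback,
     * step name, duration in ms, and status. Names are hypothetical. */
    static void trace_step(const char *dir, const char *name,
                           double duration_ms, int status)
    {
        printf("[FTL][ftl0] %s\n", dir);
        printf("[FTL][ftl0]  name:     %s\n", name);
        printf("[FTL][ftl0]  duration: %.3f ms\n", duration_ms);
        printf("[FTL][ftl0]  status:   %d\n", status);
    }

    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
    }

    int main(void)
    {
        double t0 = now_ms();
        /* ... run one management step, e.g. "Dump statistics" ... */
        trace_step("Action", "Dump statistics", now_ms() - t0, 0);
        return 0;
    }

In the shutdown traces above and below, the Rollback records all report duration: 0.000 ms, consistent with teardown steps that only release state rather than perform timed I/O.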
00:26:07.154 [2024-10-07 11:36:48.771271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.154 [2024-10-07 11:36:48.771323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:07.154 [2024-10-07 11:36:48.771338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.203 ms 00:26:07.154 [2024-10-07 11:36:48.771358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.154 [2024-10-07 11:36:48.771979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.154 [2024-10-07 11:36:48.772001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:07.154 [2024-10-07 11:36:48.772021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:26:07.154 [2024-10-07 11:36:48.772032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.154 [2024-10-07 11:36:48.815729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.154 [2024-10-07 11:36:48.815787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:07.154 [2024-10-07 11:36:48.815806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.154 [2024-10-07 11:36:48.815817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.154 [2024-10-07 11:36:48.815883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.154 [2024-10-07 11:36:48.815896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:07.154 [2024-10-07 11:36:48.815906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.154 [2024-10-07 11:36:48.815916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.154 [2024-10-07 11:36:48.815990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.154 [2024-10-07 11:36:48.816004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:07.154 [2024-10-07 11:36:48.816015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.154 [2024-10-07 11:36:48.816029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.154 [2024-10-07 11:36:48.816047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.154 [2024-10-07 11:36:48.816057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:07.154 [2024-10-07 11:36:48.816068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.154 [2024-10-07 11:36:48.816078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:48.940620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.414 [2024-10-07 11:36:48.940673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:07.414 [2024-10-07 11:36:48.940695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.414 [2024-10-07 11:36:48.940706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:49.044124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.414 [2024-10-07 11:36:49.044187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:07.414 [2024-10-07 11:36:49.044202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.414 [2024-10-07 
11:36:49.044213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:49.044307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.414 [2024-10-07 11:36:49.044319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:07.414 [2024-10-07 11:36:49.044330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.414 [2024-10-07 11:36:49.044340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:49.044395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.414 [2024-10-07 11:36:49.044407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:07.414 [2024-10-07 11:36:49.044417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.414 [2024-10-07 11:36:49.044427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:49.044553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.414 [2024-10-07 11:36:49.044567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:07.414 [2024-10-07 11:36:49.044578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.414 [2024-10-07 11:36:49.044588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:49.044630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.414 [2024-10-07 11:36:49.044648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:07.414 [2024-10-07 11:36:49.044659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.414 [2024-10-07 11:36:49.044669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:49.044707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.414 [2024-10-07 11:36:49.044718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:07.414 [2024-10-07 11:36:49.044728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.414 [2024-10-07 11:36:49.044752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:49.044800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.414 [2024-10-07 11:36:49.044812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:07.414 [2024-10-07 11:36:49.044823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.414 [2024-10-07 11:36:49.044832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.414 [2024-10-07 11:36:49.044953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.743 ms, result 0 00:26:08.838 00:26:08.838 00:26:08.838 11:36:50 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:10.778 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:10.778 11:36:52 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:10.778 [2024-10-07 11:36:52.360733] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 
initialization... 00:26:10.778 [2024-10-07 11:36:52.360879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78052 ] 00:26:11.036 [2024-10-07 11:36:52.535332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.294 [2024-10-07 11:36:52.768561] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.552 [2024-10-07 11:36:53.154072] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:11.552 [2024-10-07 11:36:53.154143] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:11.811 [2024-10-07 11:36:53.316972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.317031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:11.811 [2024-10-07 11:36:53.317048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:11.811 [2024-10-07 11:36:53.317060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.317119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.317132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:11.811 [2024-10-07 11:36:53.317144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:11.811 [2024-10-07 11:36:53.317155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.317178] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:11.811 [2024-10-07 11:36:53.318252] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:11.811 [2024-10-07 11:36:53.318304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.318317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:11.811 [2024-10-07 11:36:53.318329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:26:11.811 [2024-10-07 11:36:53.318341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.319873] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:11.811 [2024-10-07 11:36:53.340646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.340695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:11.811 [2024-10-07 11:36:53.340722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.808 ms 00:26:11.811 [2024-10-07 11:36:53.340750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.340833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.340847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:11.811 [2024-10-07 11:36:53.340858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:11.811 [2024-10-07 11:36:53.340868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.348195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.348230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:11.811 [2024-10-07 11:36:53.348243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.265 ms 00:26:11.811 [2024-10-07 11:36:53.348254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.348337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.348351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:11.811 [2024-10-07 11:36:53.348379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:11.811 [2024-10-07 11:36:53.348390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.348437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.348449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:11.811 [2024-10-07 11:36:53.348461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:11.811 [2024-10-07 11:36:53.348471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.348498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:11.811 [2024-10-07 11:36:53.353622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.353657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:11.811 [2024-10-07 11:36:53.353670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.137 ms 00:26:11.811 [2024-10-07 11:36:53.353681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.353715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.811 [2024-10-07 11:36:53.353726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:11.811 [2024-10-07 11:36:53.353747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:11.811 [2024-10-07 11:36:53.353758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.811 [2024-10-07 11:36:53.353818] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:11.811 [2024-10-07 11:36:53.353843] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:11.811 [2024-10-07 11:36:53.353881] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:11.811 [2024-10-07 11:36:53.353900] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:11.811 [2024-10-07 11:36:53.353994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:11.811 [2024-10-07 11:36:53.354007] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:11.811 [2024-10-07 11:36:53.354021] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:11.811 [2024-10-07 11:36:53.354039] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:11.811 [2024-10-07 11:36:53.354052] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354064] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:11.812 [2024-10-07 11:36:53.354075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:11.812 [2024-10-07 11:36:53.354086] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:11.812 [2024-10-07 11:36:53.354096] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:11.812 [2024-10-07 11:36:53.354107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.812 [2024-10-07 11:36:53.354118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:11.812 [2024-10-07 11:36:53.354129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:26:11.812 [2024-10-07 11:36:53.354139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.812 [2024-10-07 11:36:53.354219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.812 [2024-10-07 11:36:53.354234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:11.812 [2024-10-07 11:36:53.354246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:11.812 [2024-10-07 11:36:53.354256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.812 [2024-10-07 11:36:53.354383] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:11.812 [2024-10-07 11:36:53.354400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:11.812 [2024-10-07 11:36:53.354412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:11.812 [2024-10-07 11:36:53.354445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:11.812 [2024-10-07 11:36:53.354475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:11.812 [2024-10-07 11:36:53.354495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:11.812 [2024-10-07 11:36:53.354507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:11.812 [2024-10-07 11:36:53.354517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:11.812 [2024-10-07 11:36:53.354537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:11.812 [2024-10-07 11:36:53.354548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:11.812 [2024-10-07 11:36:53.354558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:11.812 [2024-10-07 11:36:53.354579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354589] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:11.812 [2024-10-07 11:36:53.354609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:11.812 [2024-10-07 11:36:53.354652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:11.812 [2024-10-07 11:36:53.354682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:11.812 [2024-10-07 11:36:53.354712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:11.812 [2024-10-07 11:36:53.354742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:11.812 [2024-10-07 11:36:53.354774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:11.812 [2024-10-07 11:36:53.354784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:11.812 [2024-10-07 11:36:53.354794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:11.812 [2024-10-07 11:36:53.354804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:11.812 [2024-10-07 11:36:53.354814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:11.812 [2024-10-07 11:36:53.354825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:11.812 [2024-10-07 11:36:53.354845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:11.812 [2024-10-07 11:36:53.354855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:11.812 [2024-10-07 11:36:53.354877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:11.812 [2024-10-07 11:36:53.354893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.812 [2024-10-07 11:36:53.354915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:11.812 [2024-10-07 11:36:53.354925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:11.812 [2024-10-07 11:36:53.354935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:11.812 
[2024-10-07 11:36:53.354945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:11.812 [2024-10-07 11:36:53.354955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:11.812 [2024-10-07 11:36:53.354965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:11.812 [2024-10-07 11:36:53.354977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:11.812 [2024-10-07 11:36:53.354991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:11.812 [2024-10-07 11:36:53.355004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:11.812 [2024-10-07 11:36:53.355015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:11.812 [2024-10-07 11:36:53.355026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:11.812 [2024-10-07 11:36:53.355037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:11.812 [2024-10-07 11:36:53.355049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:11.812 [2024-10-07 11:36:53.355060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:11.812 [2024-10-07 11:36:53.355071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:11.812 [2024-10-07 11:36:53.355082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:11.812 [2024-10-07 11:36:53.355093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:11.812 [2024-10-07 11:36:53.355104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:11.812 [2024-10-07 11:36:53.355116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:11.812 [2024-10-07 11:36:53.355126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:11.812 [2024-10-07 11:36:53.355138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:11.812 [2024-10-07 11:36:53.355149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:11.812 [2024-10-07 11:36:53.355160] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:11.812 [2024-10-07 11:36:53.355172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:11.812 [2024-10-07 11:36:53.355184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:11.812 [2024-10-07 11:36:53.355195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:11.812 [2024-10-07 11:36:53.355206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:11.812 [2024-10-07 11:36:53.355217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:11.812 [2024-10-07 11:36:53.355230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.812 [2024-10-07 11:36:53.355241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:11.812 [2024-10-07 11:36:53.355252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:26:11.812 [2024-10-07 11:36:53.355263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.812 [2024-10-07 11:36:53.405203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.812 [2024-10-07 11:36:53.405248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:11.812 [2024-10-07 11:36:53.405262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.966 ms 00:26:11.812 [2024-10-07 11:36:53.405273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.812 [2024-10-07 11:36:53.405365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.812 [2024-10-07 11:36:53.405377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:11.812 [2024-10-07 11:36:53.405388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:11.812 [2024-10-07 11:36:53.405398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.812 [2024-10-07 11:36:53.454333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.812 [2024-10-07 11:36:53.454376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:11.812 [2024-10-07 11:36:53.454394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.945 ms 00:26:11.812 [2024-10-07 11:36:53.454405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.812 [2024-10-07 11:36:53.454450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.813 [2024-10-07 11:36:53.454461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:11.813 [2024-10-07 11:36:53.454472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:11.813 [2024-10-07 11:36:53.454482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.813 [2024-10-07 11:36:53.455011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.813 [2024-10-07 11:36:53.455035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:11.813 [2024-10-07 11:36:53.455047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:26:11.813 [2024-10-07 11:36:53.455065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.813 [2024-10-07 11:36:53.455189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.813 [2024-10-07 11:36:53.455210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:11.813 [2024-10-07 11:36:53.455221] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:26:11.813 [2024-10-07 11:36:53.455232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.813 [2024-10-07 11:36:53.475664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.813 [2024-10-07 11:36:53.475703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:11.813 [2024-10-07 11:36:53.475718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.442 ms 00:26:11.813 [2024-10-07 11:36:53.475729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.813 [2024-10-07 11:36:53.496671] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:11.813 [2024-10-07 11:36:53.496729] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:11.813 [2024-10-07 11:36:53.496756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.813 [2024-10-07 11:36:53.496769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:11.813 [2024-10-07 11:36:53.496782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.938 ms 00:26:11.813 [2024-10-07 11:36:53.496792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.529260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.529303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:12.071 [2024-10-07 11:36:53.529317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.475 ms 00:26:12.071 [2024-10-07 11:36:53.529327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.548551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.548589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:12.071 [2024-10-07 11:36:53.548603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.186 ms 00:26:12.071 [2024-10-07 11:36:53.548614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.567595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.567632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:12.071 [2024-10-07 11:36:53.567661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.958 ms 00:26:12.071 [2024-10-07 11:36:53.567672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.568618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.568653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:12.071 [2024-10-07 11:36:53.568667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:26:12.071 [2024-10-07 11:36:53.568677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.662528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.662585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:12.071 [2024-10-07 11:36:53.662603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 93.978 ms 00:26:12.071 [2024-10-07 11:36:53.662615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.676463] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:12.071 [2024-10-07 11:36:53.679891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.679934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:12.071 [2024-10-07 11:36:53.679950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.220 ms 00:26:12.071 [2024-10-07 11:36:53.679982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.680095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.680113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:12.071 [2024-10-07 11:36:53.680125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:12.071 [2024-10-07 11:36:53.680152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.680266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.680283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:12.071 [2024-10-07 11:36:53.680295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:12.071 [2024-10-07 11:36:53.680306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.680337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.680349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:12.071 [2024-10-07 11:36:53.680360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:12.071 [2024-10-07 11:36:53.680371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.680408] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:12.071 [2024-10-07 11:36:53.680425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.680436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:12.071 [2024-10-07 11:36:53.680446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:12.071 [2024-10-07 11:36:53.680460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.720468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.720523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:12.071 [2024-10-07 11:36:53.720539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.050 ms 00:26:12.071 [2024-10-07 11:36:53.720567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.071 [2024-10-07 11:36:53.720682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.071 [2024-10-07 11:36:53.720698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:12.071 [2024-10-07 11:36:53.720710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:12.071 [2024-10-07 11:36:53.720720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:12.071 [2024-10-07 11:36:53.721976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.150 ms, result 0 00:26:13.446 [2024-10-07T11:36:56.094Z] Copying: 29/1024 [MB] (29 MBps) [... 37 intermediate spdk_dd progress updates, 22-32 MBps, elided ...] [2024-10-07T11:37:32.691Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-07 11:37:32.389394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.980 [2024-10-07 11:37:32.389460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:50.980 [2024-10-07 11:37:32.389477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:50.980 [2024-10-07 11:37:32.389488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.980 [2024-10-07 11:37:32.391187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:50.980 [2024-10-07 11:37:32.397802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.980 [2024-10-07 11:37:32.397846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:50.980 [2024-10-07 11:37:32.397863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.580 ms 00:26:50.980 [2024-10-07 11:37:32.397883] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:26:50.980 [2024-10-07 11:37:32.408600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.980 [2024-10-07 11:37:32.408646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:50.980 [2024-10-07 11:37:32.408661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.846 ms 00:26:50.980 [2024-10-07 11:37:32.408672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.980 [2024-10-07 11:37:32.432026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.980 [2024-10-07 11:37:32.432073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:50.980 [2024-10-07 11:37:32.432088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.374 ms 00:26:50.980 [2024-10-07 11:37:32.432100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.980 [2024-10-07 11:37:32.437104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.980 [2024-10-07 11:37:32.437140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:50.980 [2024-10-07 11:37:32.437153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.969 ms 00:26:50.981 [2024-10-07 11:37:32.437162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.981 [2024-10-07 11:37:32.474414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.981 [2024-10-07 11:37:32.474481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:50.981 [2024-10-07 11:37:32.474499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.233 ms 00:26:50.981 [2024-10-07 11:37:32.474510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.981 [2024-10-07 11:37:32.496664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.981 [2024-10-07 11:37:32.496737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:50.981 [2024-10-07 11:37:32.496765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.141 ms 00:26:50.981 [2024-10-07 11:37:32.496776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.981 [2024-10-07 11:37:32.616433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.981 [2024-10-07 11:37:32.616497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:50.981 [2024-10-07 11:37:32.616522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.794 ms 00:26:50.981 [2024-10-07 11:37:32.616534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.981 [2024-10-07 11:37:32.654061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.981 [2024-10-07 11:37:32.654109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:50.981 [2024-10-07 11:37:32.654124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.569 ms 00:26:50.981 [2024-10-07 11:37:32.654134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.981 [2024-10-07 11:37:32.689447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.981 [2024-10-07 11:37:32.689491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:50.981 [2024-10-07 11:37:32.689506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.328 ms 00:26:50.981 
[2024-10-07 11:37:32.689516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.242 [2024-10-07 11:37:32.725519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.242 [2024-10-07 11:37:32.725562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:51.242 [2024-10-07 11:37:32.725577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.023 ms 00:26:51.242 [2024-10-07 11:37:32.725587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.242 [2024-10-07 11:37:32.761649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.242 [2024-10-07 11:37:32.761695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:51.242 [2024-10-07 11:37:32.761710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.040 ms 00:26:51.242 [2024-10-07 11:37:32.761720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.242 [2024-10-07 11:37:32.761766] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:51.242 [2024-10-07 11:37:32.761784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107776 / 261120 wr_cnt: 1 state: open 00:26:51.242 [2024-10-07 11:37:32.761797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 
[2024-10-07 11:37:32.761970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.761991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:51.242 [2024-10-07 11:37:32.762164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:26:51.243 [2024-10-07 11:37:32.762237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:51.243 [2024-10-07 11:37:32.762868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:51.243 [2024-10-07 11:37:32.762878] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8292fed-22fb-4cec-9b97-a2c299f43dd8 00:26:51.243 [2024-10-07 11:37:32.762895] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107776 00:26:51.243 [2024-10-07 11:37:32.762905] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108736 00:26:51.243 [2024-10-07 11:37:32.762915] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107776 00:26:51.243 [2024-10-07 11:37:32.762926] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:26:51.243 [2024-10-07 11:37:32.762936] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:51.243 [2024-10-07 11:37:32.762946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:51.243 [2024-10-07 11:37:32.762956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:51.243 [2024-10-07 11:37:32.762965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:51.243 [2024-10-07 11:37:32.762974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:51.243 [2024-10-07 11:37:32.762984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.243 [2024-10-07 11:37:32.763006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:51.243 [2024-10-07 11:37:32.763017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:26:51.243 [2024-10-07 11:37:32.763027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.243 [2024-10-07 11:37:32.783426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.243 [2024-10-07 11:37:32.783485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:51.243 [2024-10-07 11:37:32.783500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.392 ms 00:26:51.243 [2024-10-07 11:37:32.783512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.243 [2024-10-07 11:37:32.784018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.243 [2024-10-07 11:37:32.784037] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:51.243 [2024-10-07 11:37:32.784048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:26:51.243 [2024-10-07 11:37:32.784065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.243 [2024-10-07 11:37:32.828245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.243 [2024-10-07 11:37:32.828299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:51.243 [2024-10-07 11:37:32.828330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.243 [2024-10-07 11:37:32.828341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.243 [2024-10-07 11:37:32.828451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.243 [2024-10-07 11:37:32.828466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:51.243 [2024-10-07 11:37:32.828478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.243 [2024-10-07 11:37:32.828492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.243 [2024-10-07 11:37:32.828575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.243 [2024-10-07 11:37:32.828590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:51.243 [2024-10-07 11:37:32.828600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.243 [2024-10-07 11:37:32.828610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.243 [2024-10-07 11:37:32.828628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.243 [2024-10-07 11:37:32.828639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:51.243 [2024-10-07 11:37:32.828649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.243 [2024-10-07 11:37:32.828659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:32.952675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.531 [2024-10-07 11:37:32.952736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:51.531 [2024-10-07 11:37:32.952768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.531 [2024-10-07 11:37:32.952779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:33.054146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.531 [2024-10-07 11:37:33.054204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:51.531 [2024-10-07 11:37:33.054219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.531 [2024-10-07 11:37:33.054236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:33.054338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.531 [2024-10-07 11:37:33.054351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:51.531 [2024-10-07 11:37:33.054362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.531 [2024-10-07 11:37:33.054372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:33.054425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:26:51.531 [2024-10-07 11:37:33.054439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:51.531 [2024-10-07 11:37:33.054449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.531 [2024-10-07 11:37:33.054459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:33.054569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.531 [2024-10-07 11:37:33.054583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:51.531 [2024-10-07 11:37:33.054593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.531 [2024-10-07 11:37:33.054603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:33.054638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.531 [2024-10-07 11:37:33.054650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:51.531 [2024-10-07 11:37:33.054661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.531 [2024-10-07 11:37:33.054671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:33.054711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.531 [2024-10-07 11:37:33.054724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:51.531 [2024-10-07 11:37:33.054734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.531 [2024-10-07 11:37:33.054756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:33.054803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:51.531 [2024-10-07 11:37:33.054815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:51.531 [2024-10-07 11:37:33.054825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:51.531 [2024-10-07 11:37:33.054835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.531 [2024-10-07 11:37:33.054950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 667.377 ms, result 0 00:26:53.438 00:26:53.438 00:26:53.439 11:37:34 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:53.439 [2024-10-07 11:37:34.797088] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
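The spdk_dd invocation above is the read-back half of the restore test: it reads 262144 I/O units from the restored ftl0 bdev, skipping the first 131072 units of the input, and writes them to the scratch testfile that the md5 check at restore.sh@82 later compares against the checksum taken before shutdown. Assuming 4 KiB blocks (consistent with the layout figures traced below), 262144 blocks works out to the 1024 MiB that the Copying progress lines report. The same invocation, with the absolute /home/vagrant/spdk_repo paths shortened for readability:

    spdk_dd --ib=ftl0 \
            --of=test/ftl/testfile \
            --json=test/ftl/config/ftl.json \
            --skip=131072 --count=262144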
00:26:53.439 [2024-10-07 11:37:34.797237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78481 ] 00:26:53.439 [2024-10-07 11:37:34.969780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.697 [2024-10-07 11:37:35.187131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.956 [2024-10-07 11:37:35.550666] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.956 [2024-10-07 11:37:35.550737] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:54.215 [2024-10-07 11:37:35.711829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.711883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:54.215 [2024-10-07 11:37:35.711898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:54.215 [2024-10-07 11:37:35.711909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.711960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.711973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:54.215 [2024-10-07 11:37:35.711983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:54.215 [2024-10-07 11:37:35.711993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.712014] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:54.215 [2024-10-07 11:37:35.712970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:54.215 [2024-10-07 11:37:35.712998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.713009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:54.215 [2024-10-07 11:37:35.713021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:26:54.215 [2024-10-07 11:37:35.713031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.714476] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:54.215 [2024-10-07 11:37:35.733808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.733850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:54.215 [2024-10-07 11:37:35.733864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.364 ms 00:26:54.215 [2024-10-07 11:37:35.733875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.733950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.733963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:54.215 [2024-10-07 11:37:35.733974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:54.215 [2024-10-07 11:37:35.733984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.740812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:54.215 [2024-10-07 11:37:35.740841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:54.215 [2024-10-07 11:37:35.740854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.769 ms 00:26:54.215 [2024-10-07 11:37:35.740865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.740944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.740957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:54.215 [2024-10-07 11:37:35.740967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:54.215 [2024-10-07 11:37:35.740978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.741025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.741037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:54.215 [2024-10-07 11:37:35.741047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:54.215 [2024-10-07 11:37:35.741058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.741082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:54.215 [2024-10-07 11:37:35.745801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.745830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:54.215 [2024-10-07 11:37:35.745842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.732 ms 00:26:54.215 [2024-10-07 11:37:35.745852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.745881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.215 [2024-10-07 11:37:35.745892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:54.215 [2024-10-07 11:37:35.745902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:54.215 [2024-10-07 11:37:35.745911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.215 [2024-10-07 11:37:35.745967] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:54.215 [2024-10-07 11:37:35.745990] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:54.215 [2024-10-07 11:37:35.746025] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:54.215 [2024-10-07 11:37:35.746042] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:54.215 [2024-10-07 11:37:35.746143] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:54.215 [2024-10-07 11:37:35.746157] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:54.215 [2024-10-07 11:37:35.746170] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:54.215 [2024-10-07 11:37:35.746186] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:54.215 [2024-10-07 11:37:35.746199] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:54.215 [2024-10-07 11:37:35.746210] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:54.215 [2024-10-07 11:37:35.746220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:54.216 [2024-10-07 11:37:35.746230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:54.216 [2024-10-07 11:37:35.746239] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:54.216 [2024-10-07 11:37:35.746250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.216 [2024-10-07 11:37:35.746260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:54.216 [2024-10-07 11:37:35.746271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:26:54.216 [2024-10-07 11:37:35.746281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.216 [2024-10-07 11:37:35.746360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.216 [2024-10-07 11:37:35.746374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:54.216 [2024-10-07 11:37:35.746384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:54.216 [2024-10-07 11:37:35.746394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.216 [2024-10-07 11:37:35.746489] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:54.216 [2024-10-07 11:37:35.746504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:54.216 [2024-10-07 11:37:35.746515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:54.216 [2024-10-07 11:37:35.746525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:54.216 [2024-10-07 11:37:35.746545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:54.216 [2024-10-07 11:37:35.746564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:54.216 [2024-10-07 11:37:35.746574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:54.216 [2024-10-07 11:37:35.746595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:54.216 [2024-10-07 11:37:35.746604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:54.216 [2024-10-07 11:37:35.746613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:54.216 [2024-10-07 11:37:35.746632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:54.216 [2024-10-07 11:37:35.746642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:54.216 [2024-10-07 11:37:35.746651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:54.216 [2024-10-07 11:37:35.746670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:54.216 [2024-10-07 11:37:35.746679] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:54.216 [2024-10-07 11:37:35.746698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:54.216 [2024-10-07 11:37:35.746716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:54.216 [2024-10-07 11:37:35.746725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:54.216 [2024-10-07 11:37:35.746759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:54.216 [2024-10-07 11:37:35.746769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:54.216 [2024-10-07 11:37:35.746787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:54.216 [2024-10-07 11:37:35.746797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:54.216 [2024-10-07 11:37:35.746815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:54.216 [2024-10-07 11:37:35.746825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:54.216 [2024-10-07 11:37:35.746843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:54.216 [2024-10-07 11:37:35.746852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:54.216 [2024-10-07 11:37:35.746861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:54.216 [2024-10-07 11:37:35.746870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:54.216 [2024-10-07 11:37:35.746879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:54.216 [2024-10-07 11:37:35.746888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:54.216 [2024-10-07 11:37:35.746908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:54.216 [2024-10-07 11:37:35.746917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746926] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:54.216 [2024-10-07 11:37:35.746936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:54.216 [2024-10-07 11:37:35.746949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:54.216 [2024-10-07 11:37:35.746959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:54.216 [2024-10-07 11:37:35.746970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:54.216 [2024-10-07 11:37:35.746979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:54.216 [2024-10-07 11:37:35.746989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:54.216 
[2024-10-07 11:37:35.746997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:54.216 [2024-10-07 11:37:35.747007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:54.216 [2024-10-07 11:37:35.747016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:54.216 [2024-10-07 11:37:35.747027] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:54.216 [2024-10-07 11:37:35.747040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:54.216 [2024-10-07 11:37:35.747051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:54.216 [2024-10-07 11:37:35.747062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:54.216 [2024-10-07 11:37:35.747072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:54.216 [2024-10-07 11:37:35.747083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:54.216 [2024-10-07 11:37:35.747093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:54.216 [2024-10-07 11:37:35.747103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:54.216 [2024-10-07 11:37:35.747113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:54.216 [2024-10-07 11:37:35.747124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:54.216 [2024-10-07 11:37:35.747134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:54.216 [2024-10-07 11:37:35.747144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:54.216 [2024-10-07 11:37:35.747154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:54.216 [2024-10-07 11:37:35.747164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:54.216 [2024-10-07 11:37:35.747174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:54.216 [2024-10-07 11:37:35.747184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:54.216 [2024-10-07 11:37:35.747194] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:54.216 [2024-10-07 11:37:35.747205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:54.216 [2024-10-07 11:37:35.747216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:54.216 [2024-10-07 11:37:35.747226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:54.216 [2024-10-07 11:37:35.747237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:54.216 [2024-10-07 11:37:35.747247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:54.216 [2024-10-07 11:37:35.747258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.216 [2024-10-07 11:37:35.747268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:54.216 [2024-10-07 11:37:35.747278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:26:54.216 [2024-10-07 11:37:35.747288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.216 [2024-10-07 11:37:35.794596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.216 [2024-10-07 11:37:35.794640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:54.216 [2024-10-07 11:37:35.794655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.333 ms 00:26:54.216 [2024-10-07 11:37:35.794666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.216 [2024-10-07 11:37:35.794764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.216 [2024-10-07 11:37:35.794776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:54.216 [2024-10-07 11:37:35.794788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:54.216 [2024-10-07 11:37:35.794798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.216 [2024-10-07 11:37:35.843244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.216 [2024-10-07 11:37:35.843286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:54.216 [2024-10-07 11:37:35.843303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.460 ms 00:26:54.216 [2024-10-07 11:37:35.843314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.216 [2024-10-07 11:37:35.843354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.216 [2024-10-07 11:37:35.843365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:54.216 [2024-10-07 11:37:35.843376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:54.217 [2024-10-07 11:37:35.843386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.217 [2024-10-07 11:37:35.843893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.217 [2024-10-07 11:37:35.843915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:54.217 [2024-10-07 11:37:35.843927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:26:54.217 [2024-10-07 11:37:35.843944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.217 [2024-10-07 11:37:35.844063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.217 [2024-10-07 11:37:35.844077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:54.217 [2024-10-07 11:37:35.844088] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:54.217 [2024-10-07 11:37:35.844098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.217 [2024-10-07 11:37:35.862152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.217 [2024-10-07 11:37:35.862187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:54.217 [2024-10-07 11:37:35.862200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.062 ms 00:26:54.217 [2024-10-07 11:37:35.862210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.217 [2024-10-07 11:37:35.881381] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:54.217 [2024-10-07 11:37:35.881419] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:54.217 [2024-10-07 11:37:35.881449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.217 [2024-10-07 11:37:35.881460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:54.217 [2024-10-07 11:37:35.881471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.121 ms 00:26:54.217 [2024-10-07 11:37:35.881481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.217 [2024-10-07 11:37:35.911314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.217 [2024-10-07 11:37:35.911369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:54.217 [2024-10-07 11:37:35.911383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.840 ms 00:26:54.217 [2024-10-07 11:37:35.911394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:35.929260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:35.929297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:54.476 [2024-10-07 11:37:35.929311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.845 ms 00:26:54.476 [2024-10-07 11:37:35.929321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:35.947262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:35.947300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:54.476 [2024-10-07 11:37:35.947314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.931 ms 00:26:54.476 [2024-10-07 11:37:35.947324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:35.948148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:35.948179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:54.476 [2024-10-07 11:37:35.948191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:26:54.476 [2024-10-07 11:37:35.948202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:36.033036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:36.033098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:54.476 [2024-10-07 11:37:36.033115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.949 ms 00:26:54.476 [2024-10-07 11:37:36.033126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:36.044399] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:54.476 [2024-10-07 11:37:36.047506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:36.047539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:54.476 [2024-10-07 11:37:36.047553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.348 ms 00:26:54.476 [2024-10-07 11:37:36.047569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:36.047667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:36.047680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:54.476 [2024-10-07 11:37:36.047691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:54.476 [2024-10-07 11:37:36.047701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:36.049225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:36.049263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:54.476 [2024-10-07 11:37:36.049275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:26:54.476 [2024-10-07 11:37:36.049286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:36.049329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:36.049340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:54.476 [2024-10-07 11:37:36.049351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:54.476 [2024-10-07 11:37:36.049361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:36.049398] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:54.476 [2024-10-07 11:37:36.049410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:36.049420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:54.476 [2024-10-07 11:37:36.049430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:54.476 [2024-10-07 11:37:36.049444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:36.086103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:36.086147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:54.476 [2024-10-07 11:37:36.086164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.696 ms 00:26:54.476 [2024-10-07 11:37:36.086178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.476 [2024-10-07 11:37:36.086265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.476 [2024-10-07 11:37:36.086280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:54.476 [2024-10-07 11:37:36.086302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:54.476 [2024-10-07 11:37:36.086313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
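Before the startup summary below, note that the layout figures traced earlier are internally consistent: 20971520 L2P entries at the reported 4-byte address size need exactly 80 MiB, matching the l2p region, and 2048 P2L checkpoint pages at a 4 KiB FTL block size give the 8.00 MiB of each p2lN region (the 4 KiB block size is inferred from these figures, not printed directly). A quick shell check of both:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80  -> the 80.00 MiB l2p region
    echo $(( 2048 * 4096 / 1024 / 1024 ))    # 8   -> 8.00 MiB per p2l0..p2l3 region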
00:26:54.476 [2024-10-07 11:37:36.087676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.972 ms, result 0 00:26:55.854  [2024-10-07T11:37:38.502Z] Copying: 27/1024 [MB] (27 MBps) [2024-10-07T11:37:39.439Z] Copying: 55/1024 [MB] (28 MBps) [2024-10-07T11:37:40.375Z] Copying: 85/1024 [MB] (29 MBps) [2024-10-07T11:37:41.753Z] Copying: 114/1024 [MB] (29 MBps) [2024-10-07T11:37:42.326Z] Copying: 143/1024 [MB] (28 MBps) [2024-10-07T11:37:43.702Z] Copying: 170/1024 [MB] (27 MBps) [2024-10-07T11:37:44.639Z] Copying: 198/1024 [MB] (27 MBps) [2024-10-07T11:37:45.603Z] Copying: 226/1024 [MB] (27 MBps) [2024-10-07T11:37:46.537Z] Copying: 254/1024 [MB] (28 MBps) [2024-10-07T11:37:47.472Z] Copying: 284/1024 [MB] (29 MBps) [2024-10-07T11:37:48.408Z] Copying: 312/1024 [MB] (28 MBps) [2024-10-07T11:37:49.346Z] Copying: 340/1024 [MB] (28 MBps) [2024-10-07T11:37:50.726Z] Copying: 369/1024 [MB] (28 MBps) [2024-10-07T11:37:51.663Z] Copying: 397/1024 [MB] (27 MBps) [2024-10-07T11:37:52.600Z] Copying: 425/1024 [MB] (28 MBps) [2024-10-07T11:37:53.536Z] Copying: 452/1024 [MB] (27 MBps) [2024-10-07T11:37:54.473Z] Copying: 480/1024 [MB] (28 MBps) [2024-10-07T11:37:55.408Z] Copying: 508/1024 [MB] (28 MBps) [2024-10-07T11:37:56.343Z] Copying: 534/1024 [MB] (26 MBps) [2024-10-07T11:37:57.720Z] Copying: 560/1024 [MB] (25 MBps) [2024-10-07T11:37:58.286Z] Copying: 587/1024 [MB] (27 MBps) [2024-10-07T11:37:59.659Z] Copying: 614/1024 [MB] (27 MBps) [2024-10-07T11:38:00.607Z] Copying: 641/1024 [MB] (26 MBps) [2024-10-07T11:38:01.543Z] Copying: 668/1024 [MB] (26 MBps) [2024-10-07T11:38:02.479Z] Copying: 697/1024 [MB] (28 MBps) [2024-10-07T11:38:03.415Z] Copying: 724/1024 [MB] (27 MBps) [2024-10-07T11:38:04.351Z] Copying: 753/1024 [MB] (28 MBps) [2024-10-07T11:38:05.286Z] Copying: 781/1024 [MB] (27 MBps) [2024-10-07T11:38:06.661Z] Copying: 807/1024 [MB] (26 MBps) [2024-10-07T11:38:07.601Z] Copying: 835/1024 [MB] (27 MBps) [2024-10-07T11:38:08.537Z] Copying: 862/1024 [MB] (27 MBps) [2024-10-07T11:38:09.474Z] Copying: 889/1024 [MB] (26 MBps) [2024-10-07T11:38:10.507Z] Copying: 916/1024 [MB] (27 MBps) [2024-10-07T11:38:11.441Z] Copying: 944/1024 [MB] (27 MBps) [2024-10-07T11:38:12.377Z] Copying: 971/1024 [MB] (27 MBps) [2024-10-07T11:38:13.313Z] Copying: 998/1024 [MB] (27 MBps) [2024-10-07T11:38:13.573Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-07 11:38:13.431081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.862 [2024-10-07 11:38:13.431179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:31.862 [2024-10-07 11:38:13.431207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:31.862 [2024-10-07 11:38:13.431226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.862 [2024-10-07 11:38:13.431265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:31.862 [2024-10-07 11:38:13.438983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.862 [2024-10-07 11:38:13.439033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:31.862 [2024-10-07 11:38:13.439052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.697 ms 00:27:31.862 [2024-10-07 11:38:13.439073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.862 [2024-10-07 11:38:13.439342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:27:31.862 [2024-10-07 11:38:13.439360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:31.862 [2024-10-07 11:38:13.439375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:27:31.862 [2024-10-07 11:38:13.439388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.862 [2024-10-07 11:38:13.445155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.862 [2024-10-07 11:38:13.445206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:31.862 [2024-10-07 11:38:13.445224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.753 ms 00:27:31.862 [2024-10-07 11:38:13.445240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.862 [2024-10-07 11:38:13.450939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.862 [2024-10-07 11:38:13.450996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:31.862 [2024-10-07 11:38:13.451024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.662 ms 00:27:31.862 [2024-10-07 11:38:13.451034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.862 [2024-10-07 11:38:13.487842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.862 [2024-10-07 11:38:13.487879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:31.862 [2024-10-07 11:38:13.487908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.821 ms 00:27:31.862 [2024-10-07 11:38:13.487918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.862 [2024-10-07 11:38:13.509124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.862 [2024-10-07 11:38:13.509163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:31.862 [2024-10-07 11:38:13.509177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.200 ms 00:27:31.862 [2024-10-07 11:38:13.509187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.122 [2024-10-07 11:38:13.647563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.122 [2024-10-07 11:38:13.647620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:32.122 [2024-10-07 11:38:13.647642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 138.557 ms 00:27:32.123 [2024-10-07 11:38:13.647654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.123 [2024-10-07 11:38:13.684255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.123 [2024-10-07 11:38:13.684293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:32.123 [2024-10-07 11:38:13.684306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.642 ms 00:27:32.123 [2024-10-07 11:38:13.684316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.123 [2024-10-07 11:38:13.720318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.123 [2024-10-07 11:38:13.720353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:32.123 [2024-10-07 11:38:13.720381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.007 ms 00:27:32.123 [2024-10-07 11:38:13.720392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.123 [2024-10-07 
11:38:13.755693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.123 [2024-10-07 11:38:13.755729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:32.123 [2024-10-07 11:38:13.755749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.305 ms 00:27:32.123 [2024-10-07 11:38:13.755760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.123 [2024-10-07 11:38:13.790703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.123 [2024-10-07 11:38:13.790748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:32.123 [2024-10-07 11:38:13.790762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.908 ms 00:27:32.123 [2024-10-07 11:38:13.790772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.123 [2024-10-07 11:38:13.790809] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:32.123 [2024-10-07 11:38:13.790825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:32.123 [2024-10-07 11:38:13.790837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.790998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:27:32.123 [2024-10-07 11:38:13.791019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 
0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:32.123 [2024-10-07 11:38:13.791596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791803] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:32.124 [2024-10-07 11:38:13.791896] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:32.124 [2024-10-07 11:38:13.791906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8292fed-22fb-4cec-9b97-a2c299f43dd8 00:27:32.124 [2024-10-07 11:38:13.791922] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:32.124 [2024-10-07 11:38:13.791932] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 24256 00:27:32.124 [2024-10-07 11:38:13.791942] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 23296 00:27:32.124 [2024-10-07 11:38:13.791953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0412 00:27:32.124 [2024-10-07 11:38:13.791962] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:32.124 [2024-10-07 11:38:13.791972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:32.124 [2024-10-07 11:38:13.791982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:32.124 [2024-10-07 11:38:13.791992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:32.124 [2024-10-07 11:38:13.792001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:32.124 [2024-10-07 11:38:13.792010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.124 [2024-10-07 11:38:13.792021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:32.124 [2024-10-07 11:38:13.792041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:27:32.124 [2024-10-07 11:38:13.792051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.124 [2024-10-07 11:38:13.811315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.124 [2024-10-07 11:38:13.811348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:32.124 [2024-10-07 11:38:13.811361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.260 ms 00:27:32.124 [2024-10-07 11:38:13.811371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.124 [2024-10-07 11:38:13.811895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.124 [2024-10-07 11:38:13.811916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:32.124 [2024-10-07 11:38:13.811928] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:27:32.124 [2024-10-07 11:38:13.811944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:13.856182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:13.856220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.383 [2024-10-07 11:38:13.856247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:13.856258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:13.856309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:13.856320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.383 [2024-10-07 11:38:13.856330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:13.856360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:13.856439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:13.856453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.383 [2024-10-07 11:38:13.856464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:13.856474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:13.856491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:13.856501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.383 [2024-10-07 11:38:13.856512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:13.856521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:13.981865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:13.981914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.383 [2024-10-07 11:38:13.981931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:13.981942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:14.083837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:14.083891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.383 [2024-10-07 11:38:14.083906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:14.083923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:14.084011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:14.084023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.383 [2024-10-07 11:38:14.084034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:14.084045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:14.084095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:14.084108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:27:32.383 [2024-10-07 11:38:14.084118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:14.084128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:14.084250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:14.084264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.383 [2024-10-07 11:38:14.084275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:14.084285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:14.084319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:14.084331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:32.383 [2024-10-07 11:38:14.084342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:14.084352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:14.084392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:14.084404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.383 [2024-10-07 11:38:14.084414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:14.084423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:14.084467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.383 [2024-10-07 11:38:14.084479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.383 [2024-10-07 11:38:14.084490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.383 [2024-10-07 11:38:14.084500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.383 [2024-10-07 11:38:14.084629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 654.581 ms, result 0 00:27:33.761 00:27:33.761 00:27:33.761 11:38:15 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:35.667 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76984 00:27:35.667 11:38:17 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76984 ']' 00:27:35.667 11:38:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76984 00:27:35.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76984) - No such process 00:27:35.667 Process with pid 76984 is not found 00:27:35.667 11:38:17 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 76984 is not found' 00:27:35.667 
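The md5sum -c pass above is the heart of the restore test: a checksum of the test file is recorded before the FTL device is torn down, then verified after it is brought back up. A minimal sketch of the pattern (variable names hypothetical, not the exact restore.sh code):

    md5sum "$testfile" > "$testfile.md5"   # record before shutdown
    # ... shut down and re-create the FTL bdev from the same base/cache devices ...
    md5sum -c "$testfile.md5"              # prints "<testfile>: OK" on success

The killprocess message that follows is expected here: the target already exited during the orderly shutdown, so kill -0 finds no such process and the helper merely logs that before cleanup continues.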
11:38:17 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:35.667 Remove shared memory files 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:35.667 11:38:17 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:35.667 00:27:35.667 real 3m6.687s 00:27:35.667 user 2m54.224s 00:27:35.667 sys 0m14.007s 00:27:35.667 11:38:17 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:35.667 11:38:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:35.667 ************************************ 00:27:35.667 END TEST ftl_restore 00:27:35.667 ************************************ 00:27:35.667 11:38:17 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:35.667 11:38:17 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:35.667 11:38:17 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:35.667 11:38:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:35.667 ************************************ 00:27:35.667 START TEST ftl_dirty_shutdown 00:27:35.667 ************************************ 00:27:35.667 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:35.926 * Looking for test storage... 00:27:35.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:35.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.926 --rc genhtml_branch_coverage=1 00:27:35.926 --rc genhtml_function_coverage=1 00:27:35.926 --rc genhtml_legend=1 00:27:35.926 --rc geninfo_all_blocks=1 00:27:35.926 --rc geninfo_unexecuted_blocks=1 00:27:35.926 00:27:35.926 ' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:35.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.926 --rc genhtml_branch_coverage=1 00:27:35.926 --rc genhtml_function_coverage=1 00:27:35.926 --rc genhtml_legend=1 00:27:35.926 --rc geninfo_all_blocks=1 00:27:35.926 --rc geninfo_unexecuted_blocks=1 00:27:35.926 00:27:35.926 ' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:35.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.926 --rc genhtml_branch_coverage=1 00:27:35.926 --rc genhtml_function_coverage=1 00:27:35.926 --rc genhtml_legend=1 00:27:35.926 --rc geninfo_all_blocks=1 00:27:35.926 --rc geninfo_unexecuted_blocks=1 00:27:35.926 00:27:35.926 ' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:35.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.926 --rc genhtml_branch_coverage=1 00:27:35.926 --rc genhtml_function_coverage=1 00:27:35.926 --rc genhtml_legend=1 00:27:35.926 --rc geninfo_all_blocks=1 00:27:35.926 --rc geninfo_unexecuted_blocks=1 00:27:35.926 00:27:35.926 ' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
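The cmp_versions trace above splits each version string on '.', '-' and ':' and compares the fields numerically from left to right. A simplified sketch of that logic (not the exact scripts/common.sh implementation):

    lt() {                                   # succeeds when $1 < $2
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                             # equal is not less-than
    }
    lt 1.15 2 && echo "lcov older than 2.x"

Here 1.15 compares below 2 on the first field, which is why the trace ends up exporting the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spelling of the coverage flags.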
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:35.926 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:35.927 11:38:17 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78978 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78978 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78978 ']' 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.927 11:38:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:36.185 [2024-10-07 11:38:17.685994] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
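The getopts trace above shows how dirty_shutdown.sh collects its devices: -c carries the NV cache BDF (0000:00:10.0 here) and the remaining positional argument is the base device (0000:00:11.0), followed by fixed test geometry defaults. A condensed sketch of that argument handling (simplified; the -u branch is assumed to name an existing FTL UUID):

    while getopts ":u:c:" opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;
            u) uuid=$OPTARG ;;     # assumption: resume an existing instance by UUID
        esac
    done
    shift $((OPTIND - 1))          # the trace shows the equivalent literal 'shift 2'
    device=$1
    timeout=240 block_size=4096 chunk_size=262144 data_size=262144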
00:27:36.185 [2024-10-07 11:38:17.686137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78978 ] 00:27:36.185 [2024-10-07 11:38:17.858268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.444 [2024-10-07 11:38:18.076835] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.382 11:38:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.382 11:38:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:37.382 11:38:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:37.382 11:38:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:37.382 11:38:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:37.382 11:38:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:37.382 11:38:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:37.382 11:38:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:37.641 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:37.641 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:37.641 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:37.641 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:37.641 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:37.641 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:37.641 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:37.641 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:37.900 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:37.900 { 00:27:37.900 "name": "nvme0n1", 00:27:37.900 "aliases": [ 00:27:37.900 "a0104628-2213-4b6e-9ec9-900f2ba26ee3" 00:27:37.900 ], 00:27:37.900 "product_name": "NVMe disk", 00:27:37.900 "block_size": 4096, 00:27:37.900 "num_blocks": 1310720, 00:27:37.900 "uuid": "a0104628-2213-4b6e-9ec9-900f2ba26ee3", 00:27:37.900 "numa_id": -1, 00:27:37.900 "assigned_rate_limits": { 00:27:37.900 "rw_ios_per_sec": 0, 00:27:37.900 "rw_mbytes_per_sec": 0, 00:27:37.900 "r_mbytes_per_sec": 0, 00:27:37.900 "w_mbytes_per_sec": 0 00:27:37.900 }, 00:27:37.900 "claimed": true, 00:27:37.900 "claim_type": "read_many_write_one", 00:27:37.900 "zoned": false, 00:27:37.900 "supported_io_types": { 00:27:37.900 "read": true, 00:27:37.900 "write": true, 00:27:37.900 "unmap": true, 00:27:37.900 "flush": true, 00:27:37.900 "reset": true, 00:27:37.900 "nvme_admin": true, 00:27:37.900 "nvme_io": true, 00:27:37.900 "nvme_io_md": false, 00:27:37.900 "write_zeroes": true, 00:27:37.900 "zcopy": false, 00:27:37.900 "get_zone_info": false, 00:27:37.900 "zone_management": false, 00:27:37.900 "zone_append": false, 00:27:37.900 "compare": true, 00:27:37.900 "compare_and_write": false, 00:27:37.900 "abort": true, 00:27:37.900 "seek_hole": false, 00:27:37.900 "seek_data": false, 00:27:37.900 
"copy": true, 00:27:37.900 "nvme_iov_md": false 00:27:37.900 }, 00:27:37.900 "driver_specific": { 00:27:37.900 "nvme": [ 00:27:37.900 { 00:27:37.900 "pci_address": "0000:00:11.0", 00:27:37.900 "trid": { 00:27:37.900 "trtype": "PCIe", 00:27:37.900 "traddr": "0000:00:11.0" 00:27:37.900 }, 00:27:37.900 "ctrlr_data": { 00:27:37.900 "cntlid": 0, 00:27:37.900 "vendor_id": "0x1b36", 00:27:37.900 "model_number": "QEMU NVMe Ctrl", 00:27:37.900 "serial_number": "12341", 00:27:37.900 "firmware_revision": "8.0.0", 00:27:37.900 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:37.900 "oacs": { 00:27:37.900 "security": 0, 00:27:37.900 "format": 1, 00:27:37.900 "firmware": 0, 00:27:37.900 "ns_manage": 1 00:27:37.900 }, 00:27:37.900 "multi_ctrlr": false, 00:27:37.900 "ana_reporting": false 00:27:37.900 }, 00:27:37.900 "vs": { 00:27:37.900 "nvme_version": "1.4" 00:27:37.900 }, 00:27:37.900 "ns_data": { 00:27:37.900 "id": 1, 00:27:37.900 "can_share": false 00:27:37.900 } 00:27:37.900 } 00:27:37.900 ], 00:27:37.900 "mp_policy": "active_passive" 00:27:37.900 } 00:27:37.900 } 00:27:37.901 ]' 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:37.901 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:38.184 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=c2ed6afd-cc90-4a32-aff0-0c1138e3e7cb 00:27:38.184 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:38.184 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c2ed6afd-cc90-4a32-aff0-0c1138e3e7cb 00:27:38.442 11:38:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:38.700 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=8a963e53-2b69-4337-94a8-72fcb7d769c1 00:27:38.700 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8a963e53-2b69-4337-94a8-72fcb7d769c1 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=5366eff7-3243-491b-988f-dfc767868591 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5366eff7-3243-491b-988f-dfc767868591 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=5366eff7-3243-491b-988f-dfc767868591 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5366eff7-3243-491b-988f-dfc767868591 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5366eff7-3243-491b-988f-dfc767868591 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5366eff7-3243-491b-988f-dfc767868591 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:38.959 { 00:27:38.959 "name": "5366eff7-3243-491b-988f-dfc767868591", 00:27:38.959 "aliases": [ 00:27:38.959 "lvs/nvme0n1p0" 00:27:38.959 ], 00:27:38.959 "product_name": "Logical Volume", 00:27:38.959 "block_size": 4096, 00:27:38.959 "num_blocks": 26476544, 00:27:38.959 "uuid": "5366eff7-3243-491b-988f-dfc767868591", 00:27:38.959 "assigned_rate_limits": { 00:27:38.959 "rw_ios_per_sec": 0, 00:27:38.959 "rw_mbytes_per_sec": 0, 00:27:38.959 "r_mbytes_per_sec": 0, 00:27:38.959 "w_mbytes_per_sec": 0 00:27:38.959 }, 00:27:38.959 "claimed": false, 00:27:38.959 "zoned": false, 00:27:38.959 "supported_io_types": { 00:27:38.959 "read": true, 00:27:38.959 "write": true, 00:27:38.959 "unmap": true, 00:27:38.959 "flush": false, 00:27:38.959 "reset": true, 00:27:38.959 "nvme_admin": false, 00:27:38.959 "nvme_io": false, 00:27:38.959 "nvme_io_md": false, 00:27:38.959 "write_zeroes": true, 00:27:38.959 "zcopy": false, 00:27:38.959 "get_zone_info": false, 00:27:38.959 "zone_management": false, 00:27:38.959 "zone_append": false, 00:27:38.959 "compare": false, 00:27:38.959 "compare_and_write": false, 00:27:38.959 "abort": false, 00:27:38.959 "seek_hole": true, 00:27:38.959 "seek_data": true, 00:27:38.959 "copy": false, 00:27:38.959 "nvme_iov_md": false 00:27:38.959 }, 00:27:38.959 "driver_specific": { 00:27:38.959 "lvol": { 00:27:38.959 "lvol_store_uuid": "8a963e53-2b69-4337-94a8-72fcb7d769c1", 00:27:38.959 "base_bdev": "nvme0n1", 00:27:38.959 "thin_provision": true, 00:27:38.959 "num_allocated_clusters": 0, 00:27:38.959 "snapshot": false, 00:27:38.959 "clone": false, 00:27:38.959 "esnap_clone": false 00:27:38.959 } 00:27:38.959 } 00:27:38.959 } 00:27:38.959 ]' 00:27:38.959 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:39.218 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:39.218 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:39.218 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:39.218 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:39.218 11:38:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:39.218 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:39.218 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:39.218 11:38:20 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:39.477 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:39.477 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:39.477 11:38:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 5366eff7-3243-491b-988f-dfc767868591 00:27:39.477 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5366eff7-3243-491b-988f-dfc767868591 00:27:39.477 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:39.477 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:39.477 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:39.477 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5366eff7-3243-491b-988f-dfc767868591 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:39.736 { 00:27:39.736 "name": "5366eff7-3243-491b-988f-dfc767868591", 00:27:39.736 "aliases": [ 00:27:39.736 "lvs/nvme0n1p0" 00:27:39.736 ], 00:27:39.736 "product_name": "Logical Volume", 00:27:39.736 "block_size": 4096, 00:27:39.736 "num_blocks": 26476544, 00:27:39.736 "uuid": "5366eff7-3243-491b-988f-dfc767868591", 00:27:39.736 "assigned_rate_limits": { 00:27:39.736 "rw_ios_per_sec": 0, 00:27:39.736 "rw_mbytes_per_sec": 0, 00:27:39.736 "r_mbytes_per_sec": 0, 00:27:39.736 "w_mbytes_per_sec": 0 00:27:39.736 }, 00:27:39.736 "claimed": false, 00:27:39.736 "zoned": false, 00:27:39.736 "supported_io_types": { 00:27:39.736 "read": true, 00:27:39.736 "write": true, 00:27:39.736 "unmap": true, 00:27:39.736 "flush": false, 00:27:39.736 "reset": true, 00:27:39.736 "nvme_admin": false, 00:27:39.736 "nvme_io": false, 00:27:39.736 "nvme_io_md": false, 00:27:39.736 "write_zeroes": true, 00:27:39.736 "zcopy": false, 00:27:39.736 "get_zone_info": false, 00:27:39.736 "zone_management": false, 00:27:39.736 "zone_append": false, 00:27:39.736 "compare": false, 00:27:39.736 "compare_and_write": false, 00:27:39.736 "abort": false, 00:27:39.736 "seek_hole": true, 00:27:39.736 "seek_data": true, 00:27:39.736 "copy": false, 00:27:39.736 "nvme_iov_md": false 00:27:39.736 }, 00:27:39.736 "driver_specific": { 00:27:39.736 "lvol": { 00:27:39.736 "lvol_store_uuid": "8a963e53-2b69-4337-94a8-72fcb7d769c1", 00:27:39.736 "base_bdev": "nvme0n1", 00:27:39.736 "thin_provision": true, 00:27:39.736 "num_allocated_clusters": 0, 00:27:39.736 "snapshot": false, 00:27:39.736 "clone": false, 00:27:39.736 "esnap_clone": false 00:27:39.736 } 00:27:39.736 } 00:27:39.736 } 00:27:39.736 ]' 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:39.736 11:38:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 5366eff7-3243-491b-988f-dfc767868591 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5366eff7-3243-491b-988f-dfc767868591 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5366eff7-3243-491b-988f-dfc767868591 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:39.995 { 00:27:39.995 "name": "5366eff7-3243-491b-988f-dfc767868591", 00:27:39.995 "aliases": [ 00:27:39.995 "lvs/nvme0n1p0" 00:27:39.995 ], 00:27:39.995 "product_name": "Logical Volume", 00:27:39.995 "block_size": 4096, 00:27:39.995 "num_blocks": 26476544, 00:27:39.995 "uuid": "5366eff7-3243-491b-988f-dfc767868591", 00:27:39.995 "assigned_rate_limits": { 00:27:39.995 "rw_ios_per_sec": 0, 00:27:39.995 "rw_mbytes_per_sec": 0, 00:27:39.995 "r_mbytes_per_sec": 0, 00:27:39.995 "w_mbytes_per_sec": 0 00:27:39.995 }, 00:27:39.995 "claimed": false, 00:27:39.995 "zoned": false, 00:27:39.995 "supported_io_types": { 00:27:39.995 "read": true, 00:27:39.995 "write": true, 00:27:39.995 "unmap": true, 00:27:39.995 "flush": false, 00:27:39.995 "reset": true, 00:27:39.995 "nvme_admin": false, 00:27:39.995 "nvme_io": false, 00:27:39.995 "nvme_io_md": false, 00:27:39.995 "write_zeroes": true, 00:27:39.995 "zcopy": false, 00:27:39.995 "get_zone_info": false, 00:27:39.995 "zone_management": false, 00:27:39.995 "zone_append": false, 00:27:39.995 "compare": false, 00:27:39.995 "compare_and_write": false, 00:27:39.995 "abort": false, 00:27:39.995 "seek_hole": true, 00:27:39.995 "seek_data": true, 00:27:39.995 "copy": false, 00:27:39.995 "nvme_iov_md": false 00:27:39.995 }, 00:27:39.995 "driver_specific": { 00:27:39.995 "lvol": { 00:27:39.995 "lvol_store_uuid": "8a963e53-2b69-4337-94a8-72fcb7d769c1", 00:27:39.995 "base_bdev": "nvme0n1", 00:27:39.995 "thin_provision": true, 00:27:39.995 "num_allocated_clusters": 0, 00:27:39.995 "snapshot": false, 00:27:39.995 "clone": false, 00:27:39.995 "esnap_clone": false 00:27:39.995 } 00:27:39.995 } 00:27:39.995 } 00:27:39.995 ]' 00:27:39.995 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:40.254 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5366eff7-3243-491b-988f-dfc767868591 
--l2p_dram_limit 10' 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:40.255 11:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5366eff7-3243-491b-988f-dfc767868591 --l2p_dram_limit 10 -c nvc0n1p0 00:27:40.255 [2024-10-07 11:38:21.956479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.255 [2024-10-07 11:38:21.956536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:40.255 [2024-10-07 11:38:21.956555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:40.255 [2024-10-07 11:38:21.956566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.255 [2024-10-07 11:38:21.956632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.255 [2024-10-07 11:38:21.956644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:40.255 [2024-10-07 11:38:21.956658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:40.255 [2024-10-07 11:38:21.956668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.255 [2024-10-07 11:38:21.956701] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:40.255 [2024-10-07 11:38:21.957824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:40.255 [2024-10-07 11:38:21.957863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.255 [2024-10-07 11:38:21.957875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:40.255 [2024-10-07 11:38:21.957889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:27:40.255 [2024-10-07 11:38:21.957903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.255 [2024-10-07 11:38:21.958068] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f8a2eac3-8451-4d8a-b5b0-3d20daacab36 00:27:40.255 [2024-10-07 11:38:21.959508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.255 [2024-10-07 11:38:21.959544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:40.255 [2024-10-07 11:38:21.959557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:40.255 [2024-10-07 11:38:21.959570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.515 [2024-10-07 11:38:21.966957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.515 [2024-10-07 11:38:21.966992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:40.515 [2024-10-07 11:38:21.967005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.352 ms 00:27:40.515 [2024-10-07 11:38:21.967019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.515 [2024-10-07 11:38:21.967117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.515 [2024-10-07 11:38:21.967134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:40.515 [2024-10-07 11:38:21.967147] 
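The startup now under way was kicked off by the device assembly traced above. Condensed, the sequence is (commands as issued in the trace; the size comments paraphrase the jq-based get_bdev_size calls):

    # base namespace: 1310720 blocks x 4096 B = 5120 MiB raw, so the 103424 MiB
    # volume below has to be thin-provisioned (-t):
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u $lvs        # -> 5366eff7-...
    # write-buffer cache: a 5171 MiB slice of the second controller:
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc_py bdev_split_create nvc0n1 -s 5171 1                  # -> nvc0n1p0
    # FTL bdev on top, with the L2P capped at 10 MiB of DRAM:
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d $split_bdev --l2p_dram_limit 10 -c nvc0n1p0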
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:40.515 [2024-10-07 11:38:21.967164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.515 [2024-10-07 11:38:21.967228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.515 [2024-10-07 11:38:21.967244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:40.515 [2024-10-07 11:38:21.967255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:40.515 [2024-10-07 11:38:21.967268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.515 [2024-10-07 11:38:21.967294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:40.515 [2024-10-07 11:38:21.972650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.515 [2024-10-07 11:38:21.972681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:40.515 [2024-10-07 11:38:21.972696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.369 ms 00:27:40.515 [2024-10-07 11:38:21.972706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.515 [2024-10-07 11:38:21.972757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.515 [2024-10-07 11:38:21.972769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:40.515 [2024-10-07 11:38:21.972782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:40.515 [2024-10-07 11:38:21.972795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.515 [2024-10-07 11:38:21.972843] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:40.515 [2024-10-07 11:38:21.972968] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:40.515 [2024-10-07 11:38:21.972988] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:40.515 [2024-10-07 11:38:21.973002] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:40.515 [2024-10-07 11:38:21.973020] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973033] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973046] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:40.515 [2024-10-07 11:38:21.973056] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:40.515 [2024-10-07 11:38:21.973069] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:40.515 [2024-10-07 11:38:21.973079] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:40.515 [2024-10-07 11:38:21.973092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.515 [2024-10-07 11:38:21.973111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:40.515 [2024-10-07 11:38:21.973125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:27:40.515 [2024-10-07 11:38:21.973135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.515 [2024-10-07 11:38:21.973209] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.515 [2024-10-07 11:38:21.973225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:40.515 [2024-10-07 11:38:21.973238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:40.515 [2024-10-07 11:38:21.973248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.515 [2024-10-07 11:38:21.973337] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:40.515 [2024-10-07 11:38:21.973349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:40.515 [2024-10-07 11:38:21.973361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:40.515 [2024-10-07 11:38:21.973393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:40.515 [2024-10-07 11:38:21.973427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.515 [2024-10-07 11:38:21.973448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:40.515 [2024-10-07 11:38:21.973457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:40.515 [2024-10-07 11:38:21.973469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.515 [2024-10-07 11:38:21.973478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:40.515 [2024-10-07 11:38:21.973490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:40.515 [2024-10-07 11:38:21.973499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:40.515 [2024-10-07 11:38:21.973522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:40.515 [2024-10-07 11:38:21.973554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:40.515 [2024-10-07 11:38:21.973586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:40.515 [2024-10-07 11:38:21.973619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973640] 
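A consistency check on the layout dump above: the device reported 20971520 L2P entries with a 4-byte address size, and

    $ echo $(( 20971520 * 4 / 1024 / 1024 ))
    80

which is exactly the 80.00 MiB of the l2p region in the NV cache layout.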
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:40.515 [2024-10-07 11:38:21.973649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.515 [2024-10-07 11:38:21.973670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:40.515 [2024-10-07 11:38:21.973684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.515 [2024-10-07 11:38:21.973704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:40.515 [2024-10-07 11:38:21.973714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:40.515 [2024-10-07 11:38:21.973725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.515 [2024-10-07 11:38:21.973734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:40.515 [2024-10-07 11:38:21.973757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:40.515 [2024-10-07 11:38:21.973766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:40.515 [2024-10-07 11:38:21.973787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:40.515 [2024-10-07 11:38:21.973799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.515 [2024-10-07 11:38:21.973808] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:40.515 [2024-10-07 11:38:21.973820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:40.515 [2024-10-07 11:38:21.973833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.516 [2024-10-07 11:38:21.973846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.516 [2024-10-07 11:38:21.973857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:40.516 [2024-10-07 11:38:21.973873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:40.516 [2024-10-07 11:38:21.973883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:40.516 [2024-10-07 11:38:21.973895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:40.516 [2024-10-07 11:38:21.973905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:40.516 [2024-10-07 11:38:21.973917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:40.516 [2024-10-07 11:38:21.973931] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:40.516 [2024-10-07 11:38:21.973945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.516 [2024-10-07 11:38:21.973957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:40.516 [2024-10-07 11:38:21.973970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:40.516 [2024-10-07 11:38:21.973980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:40.516 [2024-10-07 11:38:21.973994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:40.516 [2024-10-07 11:38:21.974005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:40.516 [2024-10-07 11:38:21.974018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:40.516 [2024-10-07 11:38:21.974028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:40.516 [2024-10-07 11:38:21.974041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:40.516 [2024-10-07 11:38:21.974052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:40.516 [2024-10-07 11:38:21.974067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:40.516 [2024-10-07 11:38:21.974078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:40.516 [2024-10-07 11:38:21.974090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:40.516 [2024-10-07 11:38:21.974101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:40.516 [2024-10-07 11:38:21.974113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:40.516 [2024-10-07 11:38:21.974123] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:40.516 [2024-10-07 11:38:21.974137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.516 [2024-10-07 11:38:21.974148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:40.516 [2024-10-07 11:38:21.974163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:40.516 [2024-10-07 11:38:21.974174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:40.516 [2024-10-07 11:38:21.974186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:40.516 [2024-10-07 11:38:21.974197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.516 [2024-10-07 11:38:21.974210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:40.516 [2024-10-07 11:38:21.974221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:27:40.516 [2024-10-07 11:38:21.974233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.516 [2024-10-07 11:38:21.974278] mngt/ftl_mngt_misc.c: 
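The superblock metadata table above encodes the same regions as hexadecimal counts of 4096-byte blocks, and the sizes line up with the MiB dump; for example:

    $ echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # blk_sz:0x5000
    80
    $ echo "$(( 0x80 * 4096 / 1024 )) KiB"      # blk_sz:0x80
    512 KiB

80 MiB is numerically consistent with the l2p region, and 512 KiB (0.50 MiB) with each band_md copy.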
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:40.516 [2024-10-07 11:38:21.974314] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:43.869 [2024-10-07 11:38:25.041433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.869 [2024-10-07 11:38:25.041505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:43.869 [2024-10-07 11:38:25.041521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3072.133 ms 00:27:43.869 [2024-10-07 11:38:25.041551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.869 [2024-10-07 11:38:25.078776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.869 [2024-10-07 11:38:25.078832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:43.869 [2024-10-07 11:38:25.078848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.834 ms 00:27:43.869 [2024-10-07 11:38:25.078862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.869 [2024-10-07 11:38:25.078997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.869 [2024-10-07 11:38:25.079013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:43.869 [2024-10-07 11:38:25.079024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:43.869 [2024-10-07 11:38:25.079041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.869 [2024-10-07 11:38:25.136981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.869 [2024-10-07 11:38:25.137033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:43.869 [2024-10-07 11:38:25.137058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.987 ms 00:27:43.869 [2024-10-07 11:38:25.137075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.869 [2024-10-07 11:38:25.137117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.869 [2024-10-07 11:38:25.137135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:43.869 [2024-10-07 11:38:25.137150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:43.869 [2024-10-07 11:38:25.137179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.869 [2024-10-07 11:38:25.137717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.869 [2024-10-07 11:38:25.137763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:43.869 [2024-10-07 11:38:25.137778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:27:43.869 [2024-10-07 11:38:25.137799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.869 [2024-10-07 11:38:25.137926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.869 [2024-10-07 11:38:25.137944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:43.869 [2024-10-07 11:38:25.137958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:27:43.869 [2024-10-07 11:38:25.137977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.161020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.161064] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:43.870 [2024-10-07 11:38:25.161078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.054 ms 00:27:43.870 [2024-10-07 11:38:25.161091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.173552] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:43.870 [2024-10-07 11:38:25.176746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.176775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:43.870 [2024-10-07 11:38:25.176791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.578 ms 00:27:43.870 [2024-10-07 11:38:25.176804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.260687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.260754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:43.870 [2024-10-07 11:38:25.260777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.965 ms 00:27:43.870 [2024-10-07 11:38:25.260804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.260998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.261012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:43.870 [2024-10-07 11:38:25.261029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:27:43.870 [2024-10-07 11:38:25.261039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.296982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.297021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:43.870 [2024-10-07 11:38:25.297037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.943 ms 00:27:43.870 [2024-10-07 11:38:25.297048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.332340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.332375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:43.870 [2024-10-07 11:38:25.332408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.288 ms 00:27:43.870 [2024-10-07 11:38:25.332418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.333145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.333174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:43.870 [2024-10-07 11:38:25.333188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:27:43.870 [2024-10-07 11:38:25.333198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.429522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.429566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:43.870 [2024-10-07 11:38:25.429586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.422 ms 00:27:43.870 [2024-10-07 11:38:25.429601] 
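The "l2p maximum resident size is 9 (of 10) MiB" notice above is the --l2p_dram_limit 10 from the create call taking effect: the full L2P is 80 MiB, so it cannot be held in DRAM and is instead paged through a cache capped at the configured limit, with part of the budget apparently reserved for the cache's own bookkeeping.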
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.466825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.466867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:43.870 [2024-10-07 11:38:25.466884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.182 ms 00:27:43.870 [2024-10-07 11:38:25.466896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.502194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.502230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:43.870 [2024-10-07 11:38:25.502261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.309 ms 00:27:43.870 [2024-10-07 11:38:25.502272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.538931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.538968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:43.870 [2024-10-07 11:38:25.538985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.666 ms 00:27:43.870 [2024-10-07 11:38:25.538995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.539041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.539053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:43.870 [2024-10-07 11:38:25.539070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:43.870 [2024-10-07 11:38:25.539082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.539202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.870 [2024-10-07 11:38:25.539215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:43.870 [2024-10-07 11:38:25.539229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:43.870 [2024-10-07 11:38:25.539239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.870 [2024-10-07 11:38:25.540270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3589.191 ms, result 0 00:27:43.870 { 00:27:43.870 "name": "ftl0", 00:27:43.870 "uuid": "f8a2eac3-8451-4d8a-b5b0-3d20daacab36" 00:27:43.870 } 00:27:43.870 11:38:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:43.870 11:38:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:44.129 11:38:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:44.129 11:38:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:44.129 11:38:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:44.389 /dev/nbd0 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:44.389 1+0 records in 00:27:44.389 1+0 records out 00:27:44.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050609 s, 8.1 MB/s 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:27:44.389 11:38:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:44.389 [2024-10-07 11:38:26.085680] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:27:44.389 [2024-10-07 11:38:26.085812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79122 ] 00:27:44.648 [2024-10-07 11:38:26.243517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.906 [2024-10-07 11:38:26.454409] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.281  [2024-10-07T11:38:28.927Z] Copying: 201/1024 [MB] (201 MBps) [2024-10-07T11:38:29.863Z] Copying: 402/1024 [MB] (201 MBps) [2024-10-07T11:38:30.797Z] Copying: 604/1024 [MB] (202 MBps) [2024-10-07T11:38:32.173Z] Copying: 801/1024 [MB] (196 MBps) [2024-10-07T11:38:32.173Z] Copying: 985/1024 [MB] (184 MBps) [2024-10-07T11:38:33.642Z] Copying: 1024/1024 [MB] (average 196 MBps) 00:27:51.931 00:27:51.931 11:38:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:53.309 11:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:53.568 [2024-10-07 11:38:35.090079] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
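The waitfornbd helper traced above (autotest_common.sh @868-@889) is, roughly, the loop below. This is a reconstruction from the xtrace, not the verbatim function: the retry sleep, the /tmp scratch path, and the failure return are assumptions, since the excerpt only shows the first-try success path.

    waitfornbd() {
        local nbd_name=$1 i
        # Wait for the kernel to publish the device: up to 20 passes over
        # /proc/partitions, as traced at @871-@873.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the interval is not visible in the trace
        done
        # Confirm the device actually serves I/O: a single 4 KiB O_DIRECT
        # read must produce a non-empty file (@884-@889).
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1   # assumption, as above
        done
        return 1        # assumption: only the success path appears above
    }

Only once this returns does the test trust /dev/nbd0 enough to start the bulk spdk_dd write traced next.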
00:27:53.568 [2024-10-07 11:38:35.090218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79216 ] 00:27:53.568 [2024-10-07 11:38:35.262525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.827 [2024-10-07 11:38:35.481072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.202  [2024-10-07T11:38:37.846Z] Copying: 17/1024 [MB] (17 MBps) [2024-10-07T11:38:39.220Z] Copying: 35/1024 [MB] (17 MBps) [2024-10-07T11:38:40.188Z] Copying: 52/1024 [MB] (17 MBps) [2024-10-07T11:38:41.125Z] Copying: 70/1024 [MB] (18 MBps) [2024-10-07T11:38:42.061Z] Copying: 88/1024 [MB] (17 MBps) [2024-10-07T11:38:42.998Z] Copying: 106/1024 [MB] (17 MBps) [2024-10-07T11:38:43.933Z] Copying: 124/1024 [MB] (18 MBps) [2024-10-07T11:38:44.868Z] Copying: 142/1024 [MB] (18 MBps) [2024-10-07T11:38:45.811Z] Copying: 161/1024 [MB] (18 MBps) [2024-10-07T11:38:47.195Z] Copying: 177/1024 [MB] (16 MBps) [2024-10-07T11:38:48.151Z] Copying: 194/1024 [MB] (17 MBps) [2024-10-07T11:38:49.085Z] Copying: 211/1024 [MB] (17 MBps) [2024-10-07T11:38:50.020Z] Copying: 228/1024 [MB] (17 MBps) [2024-10-07T11:38:50.956Z] Copying: 246/1024 [MB] (17 MBps) [2024-10-07T11:38:51.890Z] Copying: 264/1024 [MB] (17 MBps) [2024-10-07T11:38:52.826Z] Copying: 281/1024 [MB] (17 MBps) [2024-10-07T11:38:54.202Z] Copying: 298/1024 [MB] (17 MBps) [2024-10-07T11:38:55.163Z] Copying: 316/1024 [MB] (17 MBps) [2024-10-07T11:38:56.099Z] Copying: 333/1024 [MB] (17 MBps) [2024-10-07T11:38:57.068Z] Copying: 350/1024 [MB] (17 MBps) [2024-10-07T11:38:58.003Z] Copying: 367/1024 [MB] (17 MBps) [2024-10-07T11:38:58.941Z] Copying: 385/1024 [MB] (17 MBps) [2024-10-07T11:38:59.874Z] Copying: 402/1024 [MB] (17 MBps) [2024-10-07T11:39:00.807Z] Copying: 419/1024 [MB] (17 MBps) [2024-10-07T11:39:02.195Z] Copying: 436/1024 [MB] (17 MBps) [2024-10-07T11:39:02.779Z] Copying: 453/1024 [MB] (16 MBps) [2024-10-07T11:39:04.153Z] Copying: 470/1024 [MB] (17 MBps) [2024-10-07T11:39:05.088Z] Copying: 488/1024 [MB] (17 MBps) [2024-10-07T11:39:06.022Z] Copying: 505/1024 [MB] (17 MBps) [2024-10-07T11:39:06.957Z] Copying: 522/1024 [MB] (17 MBps) [2024-10-07T11:39:07.891Z] Copying: 539/1024 [MB] (17 MBps) [2024-10-07T11:39:08.830Z] Copying: 557/1024 [MB] (17 MBps) [2024-10-07T11:39:09.796Z] Copying: 574/1024 [MB] (17 MBps) [2024-10-07T11:39:11.170Z] Copying: 592/1024 [MB] (17 MBps) [2024-10-07T11:39:12.104Z] Copying: 609/1024 [MB] (17 MBps) [2024-10-07T11:39:13.038Z] Copying: 626/1024 [MB] (17 MBps) [2024-10-07T11:39:13.971Z] Copying: 643/1024 [MB] (17 MBps) [2024-10-07T11:39:14.905Z] Copying: 660/1024 [MB] (17 MBps) [2024-10-07T11:39:15.839Z] Copying: 678/1024 [MB] (17 MBps) [2024-10-07T11:39:16.802Z] Copying: 695/1024 [MB] (17 MBps) [2024-10-07T11:39:17.741Z] Copying: 713/1024 [MB] (17 MBps) [2024-10-07T11:39:19.115Z] Copying: 730/1024 [MB] (17 MBps) [2024-10-07T11:39:20.050Z] Copying: 748/1024 [MB] (17 MBps) [2024-10-07T11:39:20.985Z] Copying: 765/1024 [MB] (17 MBps) [2024-10-07T11:39:21.919Z] Copying: 784/1024 [MB] (18 MBps) [2024-10-07T11:39:22.854Z] Copying: 802/1024 [MB] (18 MBps) [2024-10-07T11:39:23.788Z] Copying: 821/1024 [MB] (18 MBps) [2024-10-07T11:39:24.748Z] Copying: 839/1024 [MB] (17 MBps) [2024-10-07T11:39:26.124Z] Copying: 856/1024 [MB] (17 MBps) [2024-10-07T11:39:27.061Z] Copying: 874/1024 [MB] (17 MBps) 
[2024-10-07T11:39:28.000Z] Copying: 891/1024 [MB] (17 MBps) [2024-10-07T11:39:28.936Z] Copying: 909/1024 [MB] (17 MBps) [2024-10-07T11:39:29.874Z] Copying: 926/1024 [MB] (17 MBps) [2024-10-07T11:39:30.808Z] Copying: 944/1024 [MB] (17 MBps) [2024-10-07T11:39:31.759Z] Copying: 962/1024 [MB] (17 MBps) [2024-10-07T11:39:33.134Z] Copying: 980/1024 [MB] (17 MBps) [2024-10-07T11:39:34.069Z] Copying: 997/1024 [MB] (17 MBps) [2024-10-07T11:39:34.327Z] Copying: 1015/1024 [MB] (17 MBps) [2024-10-07T11:39:35.703Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:28:53.992 00:28:53.992 11:39:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:53.992 11:39:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:54.250 11:39:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:54.250 [2024-10-07 11:39:35.939834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.250 [2024-10-07 11:39:35.939892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:54.250 [2024-10-07 11:39:35.939907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:54.250 [2024-10-07 11:39:35.939936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.250 [2024-10-07 11:39:35.939962] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:54.250 [2024-10-07 11:39:35.944207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.250 [2024-10-07 11:39:35.944243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:54.250 [2024-10-07 11:39:35.944260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:28:54.250 [2024-10-07 11:39:35.944271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.251 [2024-10-07 11:39:35.946632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.251 [2024-10-07 11:39:35.946672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:54.251 [2024-10-07 11:39:35.946692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:28:54.251 [2024-10-07 11:39:35.946703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:35.965341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:35.965383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:54.510 [2024-10-07 11:39:35.965416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.642 ms 00:28:54.510 [2024-10-07 11:39:35.965427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:35.970517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:35.970551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:54.510 [2024-10-07 11:39:35.970576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.053 ms 00:28:54.510 [2024-10-07 11:39:35.970586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:36.007507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:36.007544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
NV cache metadata 00:28:54.510 [2024-10-07 11:39:36.007561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.893 ms 00:28:54.510 [2024-10-07 11:39:36.007571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:36.029333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:36.029373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:54.510 [2024-10-07 11:39:36.029391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.749 ms 00:28:54.510 [2024-10-07 11:39:36.029402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:36.029555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:36.029569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:54.510 [2024-10-07 11:39:36.029583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:28:54.510 [2024-10-07 11:39:36.029598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:36.066510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:36.066549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:54.510 [2024-10-07 11:39:36.066566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.950 ms 00:28:54.510 [2024-10-07 11:39:36.066577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:36.103100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:36.103147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:54.510 [2024-10-07 11:39:36.103164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.534 ms 00:28:54.510 [2024-10-07 11:39:36.103174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:36.139108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:36.139152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:54.510 [2024-10-07 11:39:36.139170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.922 ms 00:28:54.510 [2024-10-07 11:39:36.139180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.510 [2024-10-07 11:39:36.174845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.510 [2024-10-07 11:39:36.174884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:54.510 [2024-10-07 11:39:36.174901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.616 ms 00:28:54.510 [2024-10-07 11:39:36.174912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.511 [2024-10-07 11:39:36.174957] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:54.511 [2024-10-07 11:39:36.174975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.174991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175017] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 
11:39:36.175323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:28:54.511 [2024-10-07 11:39:36.175645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.175988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:54.511 [2024-10-07 11:39:36.176135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:54.512 [2024-10-07 11:39:36.176148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:54.512 [2024-10-07 11:39:36.176159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:54.512 [2024-10-07 11:39:36.176173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:54.512 [2024-10-07 11:39:36.176184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:54.512 [2024-10-07 11:39:36.176197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:54.512 [2024-10-07 11:39:36.176209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:54.512 [2024-10-07 11:39:36.176223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:54.512 [2024-10-07 11:39:36.176242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:54.512 [2024-10-07 11:39:36.176258] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8a2eac3-8451-4d8a-b5b0-3d20daacab36 00:28:54.512 [2024-10-07 11:39:36.176269] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:54.512 
[2024-10-07 11:39:36.176283] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:54.512 [2024-10-07 11:39:36.176293] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:54.512 [2024-10-07 11:39:36.176306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:54.512 [2024-10-07 11:39:36.176316] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:54.512 [2024-10-07 11:39:36.176328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:54.512 [2024-10-07 11:39:36.176338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:54.512 [2024-10-07 11:39:36.176350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:54.512 [2024-10-07 11:39:36.176359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:54.512 [2024-10-07 11:39:36.176371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.512 [2024-10-07 11:39:36.176381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:54.512 [2024-10-07 11:39:36.176394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:28:54.512 [2024-10-07 11:39:36.176404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.512 [2024-10-07 11:39:36.196108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.512 [2024-10-07 11:39:36.196142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:54.512 [2024-10-07 11:39:36.196173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.674 ms 00:28:54.512 [2024-10-07 11:39:36.196185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.512 [2024-10-07 11:39:36.196714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.512 [2024-10-07 11:39:36.196731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:54.512 [2024-10-07 11:39:36.196759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:28:54.512 [2024-10-07 11:39:36.196773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.770 [2024-10-07 11:39:36.255179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.770 [2024-10-07 11:39:36.255222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:54.770 [2024-10-07 11:39:36.255238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.770 [2024-10-07 11:39:36.255250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.770 [2024-10-07 11:39:36.255312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.770 [2024-10-07 11:39:36.255324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:54.770 [2024-10-07 11:39:36.255338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.770 [2024-10-07 11:39:36.255351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.770 [2024-10-07 11:39:36.255440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.770 [2024-10-07 11:39:36.255454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:54.770 [2024-10-07 11:39:36.255467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.770 [2024-10-07 11:39:36.255484] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.770 [2024-10-07 11:39:36.255511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.770 [2024-10-07 11:39:36.255521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:54.770 [2024-10-07 11:39:36.255534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.770 [2024-10-07 11:39:36.255544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.770 [2024-10-07 11:39:36.380524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.770 [2024-10-07 11:39:36.380596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:54.770 [2024-10-07 11:39:36.380615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.770 [2024-10-07 11:39:36.380643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.029 [2024-10-07 11:39:36.484398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.029 [2024-10-07 11:39:36.484482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:55.029 [2024-10-07 11:39:36.484501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.029 [2024-10-07 11:39:36.484516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.029 [2024-10-07 11:39:36.484658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.029 [2024-10-07 11:39:36.484672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:55.029 [2024-10-07 11:39:36.484686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.029 [2024-10-07 11:39:36.484696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.029 [2024-10-07 11:39:36.484780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.029 [2024-10-07 11:39:36.484793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:55.029 [2024-10-07 11:39:36.484808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.029 [2024-10-07 11:39:36.484818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.029 [2024-10-07 11:39:36.484945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.029 [2024-10-07 11:39:36.484959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:55.029 [2024-10-07 11:39:36.484972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.029 [2024-10-07 11:39:36.484982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.029 [2024-10-07 11:39:36.485024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.029 [2024-10-07 11:39:36.485036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:55.029 [2024-10-07 11:39:36.485049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.029 [2024-10-07 11:39:36.485059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.029 [2024-10-07 11:39:36.485104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.029 [2024-10-07 11:39:36.485117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:55.029 [2024-10-07 11:39:36.485130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:28:55.029 [2024-10-07 11:39:36.485140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.029 [2024-10-07 11:39:36.485192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.029 [2024-10-07 11:39:36.485204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:55.029 [2024-10-07 11:39:36.485217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.029 [2024-10-07 11:39:36.485227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.029 [2024-10-07 11:39:36.485361] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 546.378 ms, result 0 00:28:55.029 true 00:28:55.029 11:39:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78978 00:28:55.029 11:39:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78978 00:28:55.029 11:39:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:55.029 [2024-10-07 11:39:36.613498] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:28:55.029 [2024-10-07 11:39:36.613641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79846 ] 00:28:55.287 [2024-10-07 11:39:36.785728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.287 [2024-10-07 11:39:36.996922] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.662  [2024-10-07T11:39:39.749Z] Copying: 196/1024 [MB] (196 MBps) [2024-10-07T11:39:40.317Z] Copying: 393/1024 [MB] (197 MBps) [2024-10-07T11:39:41.694Z] Copying: 595/1024 [MB] (202 MBps) [2024-10-07T11:39:42.631Z] Copying: 795/1024 [MB] (199 MBps) [2024-10-07T11:39:42.631Z] Copying: 995/1024 [MB] (199 MBps) [2024-10-07T11:39:44.005Z] Copying: 1024/1024 [MB] (average 199 MBps) 00:29:02.294 00:29:02.294 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78978 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:02.294 11:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:02.294 [2024-10-07 11:39:43.814299] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
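Condensed, the dirty-shutdown flow traced above (dirty_shutdown.sh @64-@88) comes down to the sequence below. This is a sketch: paths and core masks are trimmed, and $spdk_tgt_pid stands in for the traced pid 78978.

    # Snapshot the live bdev configuration so a standalone spdk_dd can
    # re-create ftl0 later (@64-@66).
    echo '{"subsystems": ['              >  ftl.json
    rpc.py save_subsystem_config -n bdev >> ftl.json
    echo ']}'                            >> ftl.json

    # Expose ftl0 as a block device and fill it with known data (@70-@77).
    modprobe nbd
    rpc.py nbd_start_disk ftl0 /dev/nbd0
    spdk_dd --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile        # reference checksum for the later verification
    spdk_dd --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

    # Tear down the nbd mapping, unload ftl0, then remove the target
    # outright (@78-@84).
    sync /dev/nbd0
    rpc.py nbd_stop_disk /dev/nbd0
    rpc.py bdev_ftl_unload -b ftl0
    kill -9 "$spdk_tgt_pid"

    # Push a second batch through ftl0 from a standalone spdk_dd that
    # re-creates the bdev from the saved JSON config (@87-@88).
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json

The @88 invocation is what produces the startup trace that follows: spdk_dd loads the superblock ('SHM: clean 0, shm_clean 0'), replays the Restore steps, and only then serves the write.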
00:29:02.294 [2024-10-07 11:39:43.814434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79921 ] 00:29:02.294 [2024-10-07 11:39:43.986498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.552 [2024-10-07 11:39:44.191432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.118 [2024-10-07 11:39:44.529811] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:03.118 [2024-10-07 11:39:44.529868] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:03.118 [2024-10-07 11:39:44.595935] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:03.118 [2024-10-07 11:39:44.596290] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:03.118 [2024-10-07 11:39:44.596472] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:03.376 [2024-10-07 11:39:44.906205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.906253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:03.376 [2024-10-07 11:39:44.906268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:03.376 [2024-10-07 11:39:44.906278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.906348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.906361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:03.376 [2024-10-07 11:39:44.906372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:03.376 [2024-10-07 11:39:44.906385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.906406] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:03.376 [2024-10-07 11:39:44.907466] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:03.376 [2024-10-07 11:39:44.907487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.907501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:03.376 [2024-10-07 11:39:44.907512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:29:03.376 [2024-10-07 11:39:44.907522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.908947] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:03.376 [2024-10-07 11:39:44.928012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.928046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:03.376 [2024-10-07 11:39:44.928061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.096 ms 00:29:03.376 [2024-10-07 11:39:44.928071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.928135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.928148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:29:03.376 [2024-10-07 11:39:44.928162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:03.376 [2024-10-07 11:39:44.928171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.934798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.934823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:03.376 [2024-10-07 11:39:44.934835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.567 ms 00:29:03.376 [2024-10-07 11:39:44.934845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.934925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.934938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:03.376 [2024-10-07 11:39:44.934949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:03.376 [2024-10-07 11:39:44.934959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.934999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.935011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:03.376 [2024-10-07 11:39:44.935023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:03.376 [2024-10-07 11:39:44.935033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.935057] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:03.376 [2024-10-07 11:39:44.939901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.939930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:03.376 [2024-10-07 11:39:44.939942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.857 ms 00:29:03.376 [2024-10-07 11:39:44.939969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.940004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.940015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:03.376 [2024-10-07 11:39:44.940026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:03.376 [2024-10-07 11:39:44.940036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.940093] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:03.376 [2024-10-07 11:39:44.940115] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:03.376 [2024-10-07 11:39:44.940152] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:03.376 [2024-10-07 11:39:44.940173] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:03.376 [2024-10-07 11:39:44.940265] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:03.376 [2024-10-07 11:39:44.940279] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:03.376 
[2024-10-07 11:39:44.940293] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:03.376 [2024-10-07 11:39:44.940306] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:03.376 [2024-10-07 11:39:44.940318] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:03.376 [2024-10-07 11:39:44.940329] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:03.376 [2024-10-07 11:39:44.940340] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:03.376 [2024-10-07 11:39:44.940350] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:03.376 [2024-10-07 11:39:44.940360] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:03.376 [2024-10-07 11:39:44.940371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.940384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:03.376 [2024-10-07 11:39:44.940394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:29:03.376 [2024-10-07 11:39:44.940405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.940477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.376 [2024-10-07 11:39:44.940488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:03.376 [2024-10-07 11:39:44.940499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:03.376 [2024-10-07 11:39:44.940509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.376 [2024-10-07 11:39:44.940606] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:03.376 [2024-10-07 11:39:44.940622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:03.376 [2024-10-07 11:39:44.940636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.376 [2024-10-07 11:39:44.940647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.376 [2024-10-07 11:39:44.940658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:03.376 [2024-10-07 11:39:44.940667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:03.376 [2024-10-07 11:39:44.940677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:03.376 [2024-10-07 11:39:44.940687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:03.376 [2024-10-07 11:39:44.940697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:03.376 [2024-10-07 11:39:44.940715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.376 [2024-10-07 11:39:44.940726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:03.376 [2024-10-07 11:39:44.940736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:03.376 [2024-10-07 11:39:44.940746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.376 [2024-10-07 11:39:44.940769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:03.376 [2024-10-07 11:39:44.940780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:03.376 [2024-10-07 11:39:44.940789] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.376 [2024-10-07 11:39:44.940799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:03.376 [2024-10-07 11:39:44.940808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:03.376 [2024-10-07 11:39:44.940818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.376 [2024-10-07 11:39:44.940827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:03.377 [2024-10-07 11:39:44.940837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:03.377 [2024-10-07 11:39:44.940845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.377 [2024-10-07 11:39:44.940855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:03.377 [2024-10-07 11:39:44.940865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:03.377 [2024-10-07 11:39:44.940874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.377 [2024-10-07 11:39:44.940882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:03.377 [2024-10-07 11:39:44.940891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:03.377 [2024-10-07 11:39:44.940900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.377 [2024-10-07 11:39:44.940910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:03.377 [2024-10-07 11:39:44.940919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:03.377 [2024-10-07 11:39:44.940928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.377 [2024-10-07 11:39:44.940937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:03.377 [2024-10-07 11:39:44.940946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:03.377 [2024-10-07 11:39:44.940955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.377 [2024-10-07 11:39:44.940964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:03.377 [2024-10-07 11:39:44.940973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:03.377 [2024-10-07 11:39:44.940982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.377 [2024-10-07 11:39:44.940991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:03.377 [2024-10-07 11:39:44.941001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:03.377 [2024-10-07 11:39:44.941010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.377 [2024-10-07 11:39:44.941019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:03.377 [2024-10-07 11:39:44.941029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:03.377 [2024-10-07 11:39:44.941039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.377 [2024-10-07 11:39:44.941049] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:03.377 [2024-10-07 11:39:44.941059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:03.377 [2024-10-07 11:39:44.941068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.377 [2024-10-07 11:39:44.941078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.377 [2024-10-07 
11:39:44.941088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:03.377 [2024-10-07 11:39:44.941098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:03.377 [2024-10-07 11:39:44.941108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:03.377 [2024-10-07 11:39:44.941117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:03.377 [2024-10-07 11:39:44.941126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:03.377 [2024-10-07 11:39:44.941135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:03.377 [2024-10-07 11:39:44.941145] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:03.377 [2024-10-07 11:39:44.941158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.377 [2024-10-07 11:39:44.941170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:03.377 [2024-10-07 11:39:44.941180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:03.377 [2024-10-07 11:39:44.941190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:03.377 [2024-10-07 11:39:44.941200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:03.377 [2024-10-07 11:39:44.941211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:03.377 [2024-10-07 11:39:44.941221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:03.377 [2024-10-07 11:39:44.941232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:03.377 [2024-10-07 11:39:44.941242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:03.377 [2024-10-07 11:39:44.941253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:03.377 [2024-10-07 11:39:44.941263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:03.377 [2024-10-07 11:39:44.941273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:03.377 [2024-10-07 11:39:44.941283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:03.377 [2024-10-07 11:39:44.941293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:03.377 [2024-10-07 11:39:44.941303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:03.377 [2024-10-07 11:39:44.941313] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:29:03.377 [2024-10-07 11:39:44.941324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.377 [2024-10-07 11:39:44.941339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:03.377 [2024-10-07 11:39:44.941349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:03.377 [2024-10-07 11:39:44.941359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:03.377 [2024-10-07 11:39:44.941372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:03.377 [2024-10-07 11:39:44.941383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:44.941394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:03.377 [2024-10-07 11:39:44.941405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:29:03.377 [2024-10-07 11:39:44.941415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.377 [2024-10-07 11:39:44.991873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:44.991908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.377 [2024-10-07 11:39:44.991922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.491 ms 00:29:03.377 [2024-10-07 11:39:44.991932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.377 [2024-10-07 11:39:44.992014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:44.992025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:03.377 [2024-10-07 11:39:44.992036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:03.377 [2024-10-07 11:39:44.992046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.377 [2024-10-07 11:39:45.035092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:45.035129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.377 [2024-10-07 11:39:45.035143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.055 ms 00:29:03.377 [2024-10-07 11:39:45.035154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.377 [2024-10-07 11:39:45.035193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:45.035205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.377 [2024-10-07 11:39:45.035216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:03.377 [2024-10-07 11:39:45.035226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.377 [2024-10-07 11:39:45.035723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:45.035754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.377 [2024-10-07 11:39:45.035767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:29:03.377 [2024-10-07 11:39:45.035777] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.377 [2024-10-07 11:39:45.035896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:45.035910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.377 [2024-10-07 11:39:45.035922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:29:03.377 [2024-10-07 11:39:45.035932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.377 [2024-10-07 11:39:45.054073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:45.054103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.377 [2024-10-07 11:39:45.054116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.149 ms 00:29:03.377 [2024-10-07 11:39:45.054127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.377 [2024-10-07 11:39:45.073601] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:03.377 [2024-10-07 11:39:45.073654] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:03.377 [2024-10-07 11:39:45.073675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.377 [2024-10-07 11:39:45.073686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:03.377 [2024-10-07 11:39:45.073699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.466 ms 00:29:03.377 [2024-10-07 11:39:45.073710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.103853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.103896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:03.635 [2024-10-07 11:39:45.103917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.131 ms 00:29:03.635 [2024-10-07 11:39:45.103927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.122122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.122158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:03.635 [2024-10-07 11:39:45.122172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.170 ms 00:29:03.635 [2024-10-07 11:39:45.122182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.140881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.140917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:03.635 [2024-10-07 11:39:45.140931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.688 ms 00:29:03.635 [2024-10-07 11:39:45.140942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.141835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.141860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:03.635 [2024-10-07 11:39:45.141872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:29:03.635 [2024-10-07 11:39:45.141883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
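
The "SB metadata layout" entries dumped a little earlier describe back-to-back block ranges: each region's blk_offs equals the previous region's blk_offs plus its blk_sz (for example, 0x20 + 0x5000 = 0x5020), and the nvc and base-dev sections each restart at offset 0x0. That invariant can be re-checked mechanically from a saved copy of this log; the following is a minimal sketch, not part of SPDK, assuming the log has been split back into one entry per line (the regex is illustrative):

import re
import sys

# Matches dump entries such as:
#   ... Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
REGION_RE = re.compile(
    r"Region type:0x[0-9a-f]+ ver:\d+ blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)"
)

def check_layout(lines):
    """Print each region and whether it starts where the previous one ended."""
    expected = None
    for line in lines:
        # The nvc and base-dev dumps are independent address spaces,
        # so restart the expected offset at every section header.
        if "SB metadata layout" in line:
            expected = 0
            continue
        m = REGION_RE.search(line)
        if m and expected is not None:
            offs, size = (int(g, 16) for g in m.groups())
            status = "ok" if offs == expected else f"GAP, expected {expected:#x}"
            print(f"blk_offs={offs:#x} blk_sz={size:#x} -> {status}")
            expected = offs + size

if __name__ == "__main__":
    check_layout(sys.stdin)

Run against the dump above, every region prints "ok": the nvc section closes at block 0x143300 (0x7220 + 0x13c0e0) and the base-dev section at 0x1940000 (0x19003a0 + 0x3fc60).
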
00:29:03.635 [2024-10-07 11:39:45.227666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.227736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:03.635 [2024-10-07 11:39:45.227760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.897 ms 00:29:03.635 [2024-10-07 11:39:45.227771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.239376] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:03.635 [2024-10-07 11:39:45.242572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.242604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:03.635 [2024-10-07 11:39:45.242620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.744 ms 00:29:03.635 [2024-10-07 11:39:45.242631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.242754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.242768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:03.635 [2024-10-07 11:39:45.242780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:03.635 [2024-10-07 11:39:45.242791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.242885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.242903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:03.635 [2024-10-07 11:39:45.242914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:03.635 [2024-10-07 11:39:45.242924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.242951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.242963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:03.635 [2024-10-07 11:39:45.242973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:03.635 [2024-10-07 11:39:45.242983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.243017] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:03.635 [2024-10-07 11:39:45.243032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.243042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:03.635 [2024-10-07 11:39:45.243055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:03.635 [2024-10-07 11:39:45.243066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.279453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 11:39:45.279514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:03.635 [2024-10-07 11:39:45.279532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.423 ms 00:29:03.635 [2024-10-07 11:39:45.279543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.279633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.635 [2024-10-07 
11:39:45.279646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:03.635 [2024-10-07 11:39:45.279657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:03.635 [2024-10-07 11:39:45.279668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.635 [2024-10-07 11:39:45.280803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.725 ms, result 0 00:29:04.604  [2024-10-07T11:39:47.689Z] Copying: 26/1024 [MB] (26 MBps) [2024-10-07T11:39:48.622Z] Copying: 53/1024 [MB] (26 MBps) [2024-10-07T11:39:49.556Z] Copying: 79/1024 [MB] (26 MBps) [2024-10-07T11:39:50.491Z] Copying: 105/1024 [MB] (26 MBps) [2024-10-07T11:39:51.425Z] Copying: 132/1024 [MB] (26 MBps) [2024-10-07T11:39:52.360Z] Copying: 158/1024 [MB] (26 MBps) [2024-10-07T11:39:53.314Z] Copying: 185/1024 [MB] (26 MBps) [2024-10-07T11:39:54.689Z] Copying: 212/1024 [MB] (27 MBps) [2024-10-07T11:39:55.623Z] Copying: 239/1024 [MB] (26 MBps) [2024-10-07T11:39:56.557Z] Copying: 266/1024 [MB] (27 MBps) [2024-10-07T11:39:57.491Z] Copying: 292/1024 [MB] (26 MBps) [2024-10-07T11:39:58.425Z] Copying: 319/1024 [MB] (26 MBps) [2024-10-07T11:39:59.359Z] Copying: 345/1024 [MB] (26 MBps) [2024-10-07T11:40:00.354Z] Copying: 372/1024 [MB] (26 MBps) [2024-10-07T11:40:01.288Z] Copying: 398/1024 [MB] (26 MBps) [2024-10-07T11:40:02.663Z] Copying: 425/1024 [MB] (26 MBps) [2024-10-07T11:40:03.598Z] Copying: 453/1024 [MB] (28 MBps) [2024-10-07T11:40:04.533Z] Copying: 480/1024 [MB] (27 MBps) [2024-10-07T11:40:05.467Z] Copying: 508/1024 [MB] (27 MBps) [2024-10-07T11:40:06.402Z] Copying: 536/1024 [MB] (27 MBps) [2024-10-07T11:40:07.350Z] Copying: 564/1024 [MB] (27 MBps) [2024-10-07T11:40:08.300Z] Copying: 592/1024 [MB] (27 MBps) [2024-10-07T11:40:09.675Z] Copying: 620/1024 [MB] (28 MBps) [2024-10-07T11:40:10.611Z] Copying: 648/1024 [MB] (28 MBps) [2024-10-07T11:40:11.545Z] Copying: 676/1024 [MB] (27 MBps) [2024-10-07T11:40:12.480Z] Copying: 703/1024 [MB] (26 MBps) [2024-10-07T11:40:13.423Z] Copying: 729/1024 [MB] (26 MBps) [2024-10-07T11:40:14.388Z] Copying: 756/1024 [MB] (27 MBps) [2024-10-07T11:40:15.323Z] Copying: 783/1024 [MB] (26 MBps) [2024-10-07T11:40:16.257Z] Copying: 810/1024 [MB] (26 MBps) [2024-10-07T11:40:17.634Z] Copying: 837/1024 [MB] (26 MBps) [2024-10-07T11:40:18.568Z] Copying: 863/1024 [MB] (26 MBps) [2024-10-07T11:40:19.502Z] Copying: 890/1024 [MB] (26 MBps) [2024-10-07T11:40:20.438Z] Copying: 918/1024 [MB] (27 MBps) [2024-10-07T11:40:21.373Z] Copying: 945/1024 [MB] (27 MBps) [2024-10-07T11:40:22.308Z] Copying: 973/1024 [MB] (28 MBps) [2024-10-07T11:40:23.242Z] Copying: 1001/1024 [MB] (27 MBps) [2024-10-07T11:40:23.809Z] Copying: 1023/1024 [MB] (21 MBps) [2024-10-07T11:40:23.809Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-07 11:40:23.798194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.098 [2024-10-07 11:40:23.798252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:42.098 [2024-10-07 11:40:23.798269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:42.098 [2024-10-07 11:40:23.798280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.098 [2024-10-07 11:40:23.799868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:42.098 [2024-10-07 11:40:23.806154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:29:42.098 [2024-10-07 11:40:23.806196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:42.098 [2024-10-07 11:40:23.806212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.248 ms 00:29:42.098 [2024-10-07 11:40:23.806225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.356 [2024-10-07 11:40:23.815503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.356 [2024-10-07 11:40:23.815549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:42.356 [2024-10-07 11:40:23.815564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.405 ms 00:29:42.356 [2024-10-07 11:40:23.815575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.356 [2024-10-07 11:40:23.838653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.356 [2024-10-07 11:40:23.838702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:42.356 [2024-10-07 11:40:23.838717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.097 ms 00:29:42.356 [2024-10-07 11:40:23.838728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.356 [2024-10-07 11:40:23.843777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.356 [2024-10-07 11:40:23.843807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:42.356 [2024-10-07 11:40:23.843820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.012 ms 00:29:42.356 [2024-10-07 11:40:23.843830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.356 [2024-10-07 11:40:23.880850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.356 [2024-10-07 11:40:23.880887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:42.356 [2024-10-07 11:40:23.880901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.022 ms 00:29:42.356 [2024-10-07 11:40:23.880911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.356 [2024-10-07 11:40:23.901670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.356 [2024-10-07 11:40:23.901708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:42.356 [2024-10-07 11:40:23.901723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.755 ms 00:29:42.356 [2024-10-07 11:40:23.901751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.356 [2024-10-07 11:40:24.002983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.356 [2024-10-07 11:40:24.003041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:42.356 [2024-10-07 11:40:24.003057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.355 ms 00:29:42.356 [2024-10-07 11:40:24.003068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.356 [2024-10-07 11:40:24.040144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.356 [2024-10-07 11:40:24.040186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:42.356 [2024-10-07 11:40:24.040200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.118 ms 00:29:42.356 [2024-10-07 11:40:24.040210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.616 [2024-10-07 
11:40:24.076507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.616 [2024-10-07 11:40:24.076573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:42.616 [2024-10-07 11:40:24.076587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.314 ms 00:29:42.616 [2024-10-07 11:40:24.076598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.616 [2024-10-07 11:40:24.112454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.616 [2024-10-07 11:40:24.112493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:42.616 [2024-10-07 11:40:24.112507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.874 ms 00:29:42.616 [2024-10-07 11:40:24.112516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.616 [2024-10-07 11:40:24.148570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.616 [2024-10-07 11:40:24.148609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:42.616 [2024-10-07 11:40:24.148623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.009 ms 00:29:42.616 [2024-10-07 11:40:24.148633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.616 [2024-10-07 11:40:24.148669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:42.616 [2024-10-07 11:40:24.148685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105472 / 261120 wr_cnt: 1 state: open 00:29:42.616 [2024-10-07 11:40:24.148698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 
0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.148995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:42.616 [2024-10-07 11:40:24.149369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149380] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 
11:40:24.149642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:42.617 [2024-10-07 11:40:24.149775] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:42.617 [2024-10-07 11:40:24.149786] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8a2eac3-8451-4d8a-b5b0-3d20daacab36 00:29:42.617 [2024-10-07 11:40:24.149799] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105472 00:29:42.617 [2024-10-07 11:40:24.149809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106432 00:29:42.617 [2024-10-07 11:40:24.149819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105472 00:29:42.617 [2024-10-07 11:40:24.149829] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:29:42.617 [2024-10-07 11:40:24.149839] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:42.617 [2024-10-07 11:40:24.149849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:42.617 [2024-10-07 11:40:24.149872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:42.617 [2024-10-07 11:40:24.149881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:42.617 [2024-10-07 11:40:24.149891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:42.617 [2024-10-07 11:40:24.149901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.617 [2024-10-07 11:40:24.149914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:42.617 [2024-10-07 11:40:24.149925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:29:42.617 [2024-10-07 11:40:24.149935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.617 [2024-10-07 11:40:24.169511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.617 [2024-10-07 11:40:24.169546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:42.617 [2024-10-07 11:40:24.169559] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.574 ms 00:29:42.617 [2024-10-07 11:40:24.169569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.617 [2024-10-07 11:40:24.170057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.617 [2024-10-07 11:40:24.170075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:42.617 [2024-10-07 11:40:24.170086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:29:42.617 [2024-10-07 11:40:24.170096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.617 [2024-10-07 11:40:24.214531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.617 [2024-10-07 11:40:24.214580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:42.617 [2024-10-07 11:40:24.214594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.617 [2024-10-07 11:40:24.214611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.617 [2024-10-07 11:40:24.214678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.617 [2024-10-07 11:40:24.214690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:42.617 [2024-10-07 11:40:24.214701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.617 [2024-10-07 11:40:24.214712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.617 [2024-10-07 11:40:24.214822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.617 [2024-10-07 11:40:24.214836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:42.617 [2024-10-07 11:40:24.214848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.617 [2024-10-07 11:40:24.214858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.617 [2024-10-07 11:40:24.214880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.617 [2024-10-07 11:40:24.214891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:42.617 [2024-10-07 11:40:24.214901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.617 [2024-10-07 11:40:24.214911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.875 [2024-10-07 11:40:24.340434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.875 [2024-10-07 11:40:24.340503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:42.875 [2024-10-07 11:40:24.340520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.875 [2024-10-07 11:40:24.340531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.875 [2024-10-07 11:40:24.442408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.876 [2024-10-07 11:40:24.442471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:42.876 [2024-10-07 11:40:24.442487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.876 [2024-10-07 11:40:24.442498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.876 [2024-10-07 11:40:24.442610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.876 [2024-10-07 11:40:24.442622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:29:42.876 [2024-10-07 11:40:24.442632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.876 [2024-10-07 11:40:24.442643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.876 [2024-10-07 11:40:24.442694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.876 [2024-10-07 11:40:24.442711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:42.876 [2024-10-07 11:40:24.442722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.876 [2024-10-07 11:40:24.442732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.876 [2024-10-07 11:40:24.442870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.876 [2024-10-07 11:40:24.442884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:42.876 [2024-10-07 11:40:24.442895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.876 [2024-10-07 11:40:24.442905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.876 [2024-10-07 11:40:24.442943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.876 [2024-10-07 11:40:24.442957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:42.876 [2024-10-07 11:40:24.442972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.876 [2024-10-07 11:40:24.442983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.876 [2024-10-07 11:40:24.443023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.876 [2024-10-07 11:40:24.443034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:42.876 [2024-10-07 11:40:24.443044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.876 [2024-10-07 11:40:24.443055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.876 [2024-10-07 11:40:24.443098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.876 [2024-10-07 11:40:24.443113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:42.876 [2024-10-07 11:40:24.443123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.876 [2024-10-07 11:40:24.443134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.876 [2024-10-07 11:40:24.443249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 646.823 ms, result 0 00:29:44.774 00:29:44.774 00:29:44.774 11:40:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:46.675 11:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:46.675 [2024-10-07 11:40:28.038882] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
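
The statistics block dumped at the end of the shutdown above reports total writes: 106432, user writes: 105472 and WAF: 1.0091; the write amplification factor is simply the ratio of total media writes to user writes, and 106432 / 105472 ≈ 1.0091, so the counters are internally consistent. A one-line cross-check, as a sketch:

# Cross-check the WAF reported by ftl_debug.c against its own counters:
# write amplification factor = total writes / user writes.
total_writes = 106432  # "total writes" from the statistics dump
user_writes = 105472   # "user writes" from the statistics dump
print(f"WAF: {total_writes / user_writes:.4f}")  # prints WAF: 1.0091
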
00:29:46.675 [2024-10-07 11:40:28.039007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80363 ] 00:29:46.675 [2024-10-07 11:40:28.208987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.933 [2024-10-07 11:40:28.420280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.192 [2024-10-07 11:40:28.774836] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:47.192 [2024-10-07 11:40:28.774909] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:47.451 [2024-10-07 11:40:28.936496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.451 [2024-10-07 11:40:28.936554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:47.451 [2024-10-07 11:40:28.936571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:47.451 [2024-10-07 11:40:28.936582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.451 [2024-10-07 11:40:28.936635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.451 [2024-10-07 11:40:28.936647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:47.451 [2024-10-07 11:40:28.936658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:47.451 [2024-10-07 11:40:28.936668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.451 [2024-10-07 11:40:28.936690] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:47.451 [2024-10-07 11:40:28.937709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:47.451 [2024-10-07 11:40:28.937737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.451 [2024-10-07 11:40:28.937763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:47.451 [2024-10-07 11:40:28.937774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:29:47.451 [2024-10-07 11:40:28.937785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.451 [2024-10-07 11:40:28.939205] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:47.451 [2024-10-07 11:40:28.957778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.451 [2024-10-07 11:40:28.957820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:47.451 [2024-10-07 11:40:28.957835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.603 ms 00:29:47.451 [2024-10-07 11:40:28.957846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.451 [2024-10-07 11:40:28.957908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.451 [2024-10-07 11:40:28.957937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:47.451 [2024-10-07 11:40:28.957950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:47.451 [2024-10-07 11:40:28.957961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.451 [2024-10-07 11:40:28.964632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
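
Every management step in this log is printed by mngt/ftl_mngt.c as a four-entry group (Action or Rollback, then name, duration, status), and the finish_msg lines give the totals: earlier in this log, 'FTL startup' took 374.725 ms and 'FTL shutdown' 646.823 ms. Pairing each name with the duration that follows it makes the slow steps easy to spot; a minimal sketch under the same one-entry-per-line assumption, with an illustrative regex of my own:

import re
import sys
from collections import defaultdict

# Field lines printed by mngt/ftl_mngt.c trace_step, e.g.
#   mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
#   mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.278 ms
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.*\S)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def step_durations(lines):
    """Sum the duration printed after each step name (steps may repeat)."""
    totals = defaultdict(float)
    pending = None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1)
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            totals[pending] += float(m.group(1))
            pending = None
    return totals

if __name__ == "__main__":
    for name, ms in sorted(step_durations(sys.stdin).items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")
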
00:29:47.451 [2024-10-07 11:40:28.964661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:47.451 [2024-10-07 11:40:28.964673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.610 ms 00:29:47.451 [2024-10-07 11:40:28.964684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.451 [2024-10-07 11:40:28.964773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.451 [2024-10-07 11:40:28.964788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:47.451 [2024-10-07 11:40:28.964799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:47.451 [2024-10-07 11:40:28.964809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.451 [2024-10-07 11:40:28.964856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.451 [2024-10-07 11:40:28.964868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:47.451 [2024-10-07 11:40:28.964880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:47.451 [2024-10-07 11:40:28.964890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.452 [2024-10-07 11:40:28.964915] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:47.452 [2024-10-07 11:40:28.969671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.452 [2024-10-07 11:40:28.969704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:47.452 [2024-10-07 11:40:28.969716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.770 ms 00:29:47.452 [2024-10-07 11:40:28.969726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.452 [2024-10-07 11:40:28.969800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.452 [2024-10-07 11:40:28.969817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:47.452 [2024-10-07 11:40:28.969829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:47.452 [2024-10-07 11:40:28.969839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.452 [2024-10-07 11:40:28.969902] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:47.452 [2024-10-07 11:40:28.969925] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:47.452 [2024-10-07 11:40:28.969961] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:47.452 [2024-10-07 11:40:28.969979] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:47.452 [2024-10-07 11:40:28.970070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:47.452 [2024-10-07 11:40:28.970084] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:47.452 [2024-10-07 11:40:28.970097] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:47.452 [2024-10-07 11:40:28.970114] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970126] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970138] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:47.452 [2024-10-07 11:40:28.970148] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:47.452 [2024-10-07 11:40:28.970158] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:47.452 [2024-10-07 11:40:28.970168] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:47.452 [2024-10-07 11:40:28.970178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.452 [2024-10-07 11:40:28.970189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:47.452 [2024-10-07 11:40:28.970199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:29:47.452 [2024-10-07 11:40:28.970208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.452 [2024-10-07 11:40:28.970283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.452 [2024-10-07 11:40:28.970306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:47.452 [2024-10-07 11:40:28.970318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:47.452 [2024-10-07 11:40:28.970327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.452 [2024-10-07 11:40:28.970420] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:47.452 [2024-10-07 11:40:28.970438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:47.452 [2024-10-07 11:40:28.970449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:47.452 [2024-10-07 11:40:28.970479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:47.452 [2024-10-07 11:40:28.970508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:47.452 [2024-10-07 11:40:28.970527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:47.452 [2024-10-07 11:40:28.970537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:47.452 [2024-10-07 11:40:28.970546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:47.452 [2024-10-07 11:40:28.970565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:47.452 [2024-10-07 11:40:28.970575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:47.452 [2024-10-07 11:40:28.970584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:47.452 [2024-10-07 11:40:28.970602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970611] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:47.452 [2024-10-07 11:40:28.970630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:47.452 [2024-10-07 11:40:28.970659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:47.452 [2024-10-07 11:40:28.970685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:47.452 [2024-10-07 11:40:28.970712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:47.452 [2024-10-07 11:40:28.970753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:47.452 [2024-10-07 11:40:28.970772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:47.452 [2024-10-07 11:40:28.970781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:47.452 [2024-10-07 11:40:28.970790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:47.452 [2024-10-07 11:40:28.970799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:47.452 [2024-10-07 11:40:28.970809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:47.452 [2024-10-07 11:40:28.970818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:47.452 [2024-10-07 11:40:28.970836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:47.452 [2024-10-07 11:40:28.970845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970854] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:47.452 [2024-10-07 11:40:28.970865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:47.452 [2024-10-07 11:40:28.970878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.452 [2024-10-07 11:40:28.970898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:47.452 [2024-10-07 11:40:28.970908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:47.452 [2024-10-07 11:40:28.970917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:47.452 
[2024-10-07 11:40:28.970926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:47.452 [2024-10-07 11:40:28.970935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:47.452 [2024-10-07 11:40:28.970945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:47.452 [2024-10-07 11:40:28.970955] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:47.452 [2024-10-07 11:40:28.970968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.452 [2024-10-07 11:40:28.970979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:47.452 [2024-10-07 11:40:28.970990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:47.452 [2024-10-07 11:40:28.971000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:47.452 [2024-10-07 11:40:28.971011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:47.452 [2024-10-07 11:40:28.971021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:47.452 [2024-10-07 11:40:28.971032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:47.452 [2024-10-07 11:40:28.971042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:47.452 [2024-10-07 11:40:28.971052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:47.452 [2024-10-07 11:40:28.971063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:47.452 [2024-10-07 11:40:28.971073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:47.452 [2024-10-07 11:40:28.971083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:47.452 [2024-10-07 11:40:28.971093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:47.452 [2024-10-07 11:40:28.971103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:47.452 [2024-10-07 11:40:28.971113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:47.452 [2024-10-07 11:40:28.971123] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:47.452 [2024-10-07 11:40:28.971134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.452 [2024-10-07 11:40:28.971146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:47.452 [2024-10-07 11:40:28.971156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:47.453 [2024-10-07 11:40:28.971166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:47.453 [2024-10-07 11:40:28.971177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:47.453 [2024-10-07 11:40:28.971188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:28.971199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:47.453 [2024-10-07 11:40:28.971209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:29:47.453 [2024-10-07 11:40:28.971219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.021618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.021661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:47.453 [2024-10-07 11:40:29.021676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.430 ms 00:29:47.453 [2024-10-07 11:40:29.021687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.021797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.021809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:47.453 [2024-10-07 11:40:29.021820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:47.453 [2024-10-07 11:40:29.021830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.068103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.068141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:47.453 [2024-10-07 11:40:29.068159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.278 ms 00:29:47.453 [2024-10-07 11:40:29.068170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.068218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.068230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:47.453 [2024-10-07 11:40:29.068241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:47.453 [2024-10-07 11:40:29.068251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.068732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.068761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:47.453 [2024-10-07 11:40:29.068772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:29:47.453 [2024-10-07 11:40:29.068789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.068908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.068925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:47.453 [2024-10-07 11:40:29.068936] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:29:47.453 [2024-10-07 11:40:29.068945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.085942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.085990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:47.453 [2024-10-07 11:40:29.086019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.002 ms 00:29:47.453 [2024-10-07 11:40:29.086030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.104999] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:47.453 [2024-10-07 11:40:29.105038] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:47.453 [2024-10-07 11:40:29.105055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.105066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:47.453 [2024-10-07 11:40:29.105078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.938 ms 00:29:47.453 [2024-10-07 11:40:29.105087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.135038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.135081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:47.453 [2024-10-07 11:40:29.135095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.951 ms 00:29:47.453 [2024-10-07 11:40:29.135106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.453 [2024-10-07 11:40:29.153879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.453 [2024-10-07 11:40:29.153916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:47.453 [2024-10-07 11:40:29.153929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.760 ms 00:29:47.453 [2024-10-07 11:40:29.153940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.172603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.172655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:47.711 [2024-10-07 11:40:29.172669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.655 ms 00:29:47.711 [2024-10-07 11:40:29.172679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.173501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.173533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:47.711 [2024-10-07 11:40:29.173545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:29:47.711 [2024-10-07 11:40:29.173556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.259083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.259150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:47.711 [2024-10-07 11:40:29.259166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.643 ms 00:29:47.711 [2024-10-07 11:40:29.259178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.270041] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:47.711 [2024-10-07 11:40:29.272806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.272835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:47.711 [2024-10-07 11:40:29.272864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.597 ms 00:29:47.711 [2024-10-07 11:40:29.272879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.272967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.272980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:47.711 [2024-10-07 11:40:29.272992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:47.711 [2024-10-07 11:40:29.273003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.274492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.274530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:47.711 [2024-10-07 11:40:29.274543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:29:47.711 [2024-10-07 11:40:29.274553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.274593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.274605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:47.711 [2024-10-07 11:40:29.274615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:47.711 [2024-10-07 11:40:29.274625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.274661] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:47.711 [2024-10-07 11:40:29.274673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.274684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:47.711 [2024-10-07 11:40:29.274694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:47.711 [2024-10-07 11:40:29.274708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.711 [2024-10-07 11:40:29.311536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.711 [2024-10-07 11:40:29.311576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:47.711 [2024-10-07 11:40:29.311607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.868 ms 00:29:47.711 [2024-10-07 11:40:29.311617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.712 [2024-10-07 11:40:29.311692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.712 [2024-10-07 11:40:29.311704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:47.712 [2024-10-07 11:40:29.311715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:47.712 [2024-10-07 11:40:29.311725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
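The startup sequence above is emitted as fixed trace_step groups from mngt/ftl_mngt.c: an Action notice, then name, duration, and status. Those per-step durations can be tallied from a captured copy of this console output and compared against the 'FTL startup' management-process total reported just below; the listed steps sum to slightly less than that total, the remainder presumably being time spent between steps. A minimal sketch in Python, assuming the log text is saved verbatim to a file (console.log is a placeholder name):

import re

# Tally FTL trace_step durations from a captured console log.
# Relies on the exact "name: <step>" / "duration: <ms> ms" *NOTICE*
# format shown above, and on name/duration notices alternating.
text = open("console.log").read()

names = re.findall(r"\[FTL\]\[ftl0\] name: (.*?)\s+\d{2}:\d{2}:\d{2}", text)
durations = re.findall(r"\[FTL\]\[ftl0\] duration: ([\d.]+) ms", text)

for name, ms in zip(names, durations):
    print(f"{name:32s} {float(ms):9.3f} ms")
print(f"{'sum of steps':32s} {sum(map(float, durations)):9.3f} ms")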
00:29:47.712 [2024-10-07 11:40:29.312794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.433 ms, result 0 00:29:49.088  [2024-10-07T11:40:31.736Z] Copying: 1276/1048576 [kB] (1276 kBps) [2024-10-07T11:40:32.671Z] Copying: 10108/1048576 [kB] (8832 kBps) [2024-10-07T11:40:33.607Z] Copying: 45/1024 [MB] (35 MBps) [2024-10-07T11:40:34.551Z] Copying: 79/1024 [MB] (34 MBps) [2024-10-07T11:40:35.924Z] Copying: 114/1024 [MB] (35 MBps) [2024-10-07T11:40:36.859Z] Copying: 150/1024 [MB] (35 MBps) [2024-10-07T11:40:37.794Z] Copying: 185/1024 [MB] (35 MBps) [2024-10-07T11:40:38.730Z] Copying: 221/1024 [MB] (35 MBps) [2024-10-07T11:40:39.665Z] Copying: 255/1024 [MB] (34 MBps) [2024-10-07T11:40:40.601Z] Copying: 290/1024 [MB] (34 MBps) [2024-10-07T11:40:41.538Z] Copying: 324/1024 [MB] (33 MBps) [2024-10-07T11:40:42.913Z] Copying: 359/1024 [MB] (35 MBps) [2024-10-07T11:40:43.850Z] Copying: 394/1024 [MB] (34 MBps) [2024-10-07T11:40:44.784Z] Copying: 429/1024 [MB] (35 MBps) [2024-10-07T11:40:45.719Z] Copying: 465/1024 [MB] (35 MBps) [2024-10-07T11:40:46.655Z] Copying: 499/1024 [MB] (34 MBps) [2024-10-07T11:40:47.609Z] Copying: 536/1024 [MB] (36 MBps) [2024-10-07T11:40:48.552Z] Copying: 572/1024 [MB] (36 MBps) [2024-10-07T11:40:49.928Z] Copying: 608/1024 [MB] (35 MBps) [2024-10-07T11:40:50.864Z] Copying: 645/1024 [MB] (36 MBps) [2024-10-07T11:40:51.800Z] Copying: 681/1024 [MB] (36 MBps) [2024-10-07T11:40:52.736Z] Copying: 718/1024 [MB] (36 MBps) [2024-10-07T11:40:53.672Z] Copying: 754/1024 [MB] (36 MBps) [2024-10-07T11:40:54.608Z] Copying: 791/1024 [MB] (36 MBps) [2024-10-07T11:40:55.545Z] Copying: 826/1024 [MB] (35 MBps) [2024-10-07T11:40:56.923Z] Copying: 862/1024 [MB] (35 MBps) [2024-10-07T11:40:57.493Z] Copying: 898/1024 [MB] (35 MBps) [2024-10-07T11:40:58.869Z] Copying: 933/1024 [MB] (35 MBps) [2024-10-07T11:40:59.804Z] Copying: 968/1024 [MB] (35 MBps) [2024-10-07T11:41:00.370Z] Copying: 1003/1024 [MB] (34 MBps) [2024-10-07T11:41:01.808Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-10-07 11:41:01.739202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.097 [2024-10-07 11:41:01.739272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:20.097 [2024-10-07 11:41:01.739291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:20.097 [2024-10-07 11:41:01.739303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.097 [2024-10-07 11:41:01.739336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:20.097 [2024-10-07 11:41:01.743794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.097 [2024-10-07 11:41:01.743840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:20.097 [2024-10-07 11:41:01.743853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.442 ms 00:30:20.097 [2024-10-07 11:41:01.743864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.097 [2024-10-07 11:41:01.744074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.097 [2024-10-07 11:41:01.744087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:20.097 [2024-10-07 11:41:01.744098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:30:20.097 [2024-10-07 11:41:01.744109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:20.097 [2024-10-07 11:41:01.756048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.097 [2024-10-07 11:41:01.756093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:20.097 [2024-10-07 11:41:01.756109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.939 ms 00:30:20.097 [2024-10-07 11:41:01.756128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.097 [2024-10-07 11:41:01.761430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.097 [2024-10-07 11:41:01.761466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:20.097 [2024-10-07 11:41:01.761478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.274 ms 00:30:20.097 [2024-10-07 11:41:01.761489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.097 [2024-10-07 11:41:01.800160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.097 [2024-10-07 11:41:01.800207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:20.097 [2024-10-07 11:41:01.800222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.640 ms 00:30:20.097 [2024-10-07 11:41:01.800233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.357 [2024-10-07 11:41:01.821432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.357 [2024-10-07 11:41:01.821477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:20.357 [2024-10-07 11:41:01.821493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.190 ms 00:30:20.357 [2024-10-07 11:41:01.821504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.357 [2024-10-07 11:41:01.823512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.357 [2024-10-07 11:41:01.823551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:20.357 [2024-10-07 11:41:01.823566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.960 ms 00:30:20.357 [2024-10-07 11:41:01.823576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.357 [2024-10-07 11:41:01.860021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.357 [2024-10-07 11:41:01.860057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:20.357 [2024-10-07 11:41:01.860071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.486 ms 00:30:20.357 [2024-10-07 11:41:01.860082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.357 [2024-10-07 11:41:01.895937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.357 [2024-10-07 11:41:01.895971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:20.357 [2024-10-07 11:41:01.895984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.876 ms 00:30:20.357 [2024-10-07 11:41:01.895994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.357 [2024-10-07 11:41:01.931513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.357 [2024-10-07 11:41:01.931562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:20.357 [2024-10-07 11:41:01.931575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.525 ms 00:30:20.357 [2024-10-07 
11:41:01.931584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.357 [2024-10-07 11:41:01.967789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.357 [2024-10-07 11:41:01.967825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:20.357 [2024-10-07 11:41:01.967839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.187 ms 00:30:20.357 [2024-10-07 11:41:01.967849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.357 [2024-10-07 11:41:01.967884] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:20.357 [2024-10-07 11:41:01.967901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:20.357 [2024-10-07 11:41:01.967920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:20.357 [2024-10-07 11:41:01.967932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.967943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.967954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.967965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.967975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.967986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.967996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:20.357 [2024-10-07 11:41:01.968126] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 
[2024-10-07 11:41:01.968390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:30:20.358 [2024-10-07 11:41:01.968651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:20.358 [2024-10-07 11:41:01.968990] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:20.358 [2024-10-07 11:41:01.969000] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8a2eac3-8451-4d8a-b5b0-3d20daacab36 00:30:20.358 [2024-10-07 11:41:01.969012] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:20.358 [2024-10-07 11:41:01.969022] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 159168 00:30:20.358 [2024-10-07 11:41:01.969032] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 157184 00:30:20.358 [2024-10-07 11:41:01.969042] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:30:20.358 [2024-10-07 11:41:01.969052] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:20.358 [2024-10-07 11:41:01.969064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:20.358 [2024-10-07 11:41:01.969074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:20.358 [2024-10-07 11:41:01.969082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:20.358 [2024-10-07 11:41:01.969092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:20.358 [2024-10-07 11:41:01.969101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.358 [2024-10-07 11:41:01.969111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:20.358 [2024-10-07 11:41:01.969132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:30:20.358 [2024-10-07 11:41:01.969146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.358 [2024-10-07 11:41:01.989250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.358 [2024-10-07 11:41:01.989282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:20.359 [2024-10-07 11:41:01.989296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.102 ms 00:30:20.359 [2024-10-07 11:41:01.989307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.359 [2024-10-07 11:41:01.989830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.359 [2024-10-07 11:41:01.989853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:20.359 [2024-10-07 11:41:01.989864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:30:20.359 [2024-10-07 11:41:01.989875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.359 [2024-10-07 11:41:02.035096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.359 [2024-10-07 11:41:02.035133] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:20.359 [2024-10-07 11:41:02.035147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.359 [2024-10-07 11:41:02.035157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.359 [2024-10-07 11:41:02.035212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.359 [2024-10-07 11:41:02.035229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:20.359 [2024-10-07 11:41:02.035240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.359 [2024-10-07 11:41:02.035250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.359 [2024-10-07 11:41:02.035313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.359 [2024-10-07 11:41:02.035326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:20.359 [2024-10-07 11:41:02.035336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.359 [2024-10-07 11:41:02.035347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.359 [2024-10-07 11:41:02.035363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.359 [2024-10-07 11:41:02.035374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:20.359 [2024-10-07 11:41:02.035389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.359 [2024-10-07 11:41:02.035399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.161442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.618 [2024-10-07 11:41:02.161500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:20.618 [2024-10-07 11:41:02.161515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.618 [2024-10-07 11:41:02.161527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.263999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.618 [2024-10-07 11:41:02.264069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:20.618 [2024-10-07 11:41:02.264083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.618 [2024-10-07 11:41:02.264093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.264200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.618 [2024-10-07 11:41:02.264220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:20.618 [2024-10-07 11:41:02.264231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.618 [2024-10-07 11:41:02.264241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.264291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.618 [2024-10-07 11:41:02.264303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:20.618 [2024-10-07 11:41:02.264313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.618 [2024-10-07 11:41:02.264327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.264481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:30:20.618 [2024-10-07 11:41:02.264495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:20.618 [2024-10-07 11:41:02.264506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.618 [2024-10-07 11:41:02.264516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.264551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.618 [2024-10-07 11:41:02.264569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:20.618 [2024-10-07 11:41:02.264580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.618 [2024-10-07 11:41:02.264590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.264634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.618 [2024-10-07 11:41:02.264645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:20.618 [2024-10-07 11:41:02.264655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.618 [2024-10-07 11:41:02.264665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.264708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.618 [2024-10-07 11:41:02.264719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:20.618 [2024-10-07 11:41:02.264730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.618 [2024-10-07 11:41:02.264762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.618 [2024-10-07 11:41:02.264883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.506 ms, result 0 00:30:21.996 00:30:21.996 00:30:21.996 11:41:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:23.900 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:23.900 11:41:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:23.900 [2024-10-07 11:41:05.332958] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
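Several figures in the shutdown dump above are internally consistent and easy to sanity-check: the reported WAF is total writes over user writes, the total valid LBA count equals the fill of the one closed band plus the one open band, and the spdk_dd read-back size matches the 1024 MiB copy progress if --count and --skip are taken in 4 KiB FTL blocks (the block size implied by the layout dump, where 0x5000 blocks print as 80.00 MiB). A quick check in Python, with every number copied from the log:

# All values below are quoted from the dumps above.
total_writes, user_writes = 159168, 157184
print(round(total_writes / user_writes, 4))   # 1.0126, the reported WAF

band1, band2 = 261120, 1536                   # closed band + open band fill
assert band1 + band2 == 262656                # "total valid LBAs: 262656"

BLOCK = 4096                                  # implied FTL block size, bytes
count = skip = 262144                         # from the spdk_dd invocation
print(count * BLOCK // 2**20, "MiB")          # 1024, matching "Copying: 1024/1024 [MB]"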
00:30:23.900 [2024-10-07 11:41:05.333213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80745 ] 00:30:23.900 [2024-10-07 11:41:05.507202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.159 [2024-10-07 11:41:05.723483] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.418 [2024-10-07 11:41:06.078412] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:24.418 [2024-10-07 11:41:06.078484] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:24.678 [2024-10-07 11:41:06.240076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.240123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:24.678 [2024-10-07 11:41:06.240139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:24.678 [2024-10-07 11:41:06.240150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.240204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.240216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:24.678 [2024-10-07 11:41:06.240227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:24.678 [2024-10-07 11:41:06.240237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.240258] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:24.678 [2024-10-07 11:41:06.241271] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:24.678 [2024-10-07 11:41:06.241300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.241311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:24.678 [2024-10-07 11:41:06.241323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:30:24.678 [2024-10-07 11:41:06.241332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.242764] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:24.678 [2024-10-07 11:41:06.260944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.260986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:24.678 [2024-10-07 11:41:06.261001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.210 ms 00:30:24.678 [2024-10-07 11:41:06.261012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.261072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.261086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:24.678 [2024-10-07 11:41:06.261097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:24.678 [2024-10-07 11:41:06.261107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.267828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
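The superblock metadata dump that follows (identical in shape to the one at the start of this run) lists each region as a hex type, version, block offset, and block size. With the same 4 KiB block size, those hex fields convert directly to the MiB offsets and sizes that dump_region prints. A small illustrative conversion, with rows copied from the nvc dump and the region names inferred by matching offsets against dump_region's output (not an SPDK API):

BLOCK_MIB = 4096 / 2**20        # 4 KiB FTL blocks, per the dump above

# (type, ver, blk_offs, blk_sz) rows copied from the nvc layout dump;
# name comments inferred by matching the dump_region lines.
regions = [
    (0x0, 5, 0x0, 0x20),        # sb:      offset 0.00 MiB, 0.12 MiB
    (0x2, 0, 0x20, 0x5000),     # l2p:     offset 0.12 MiB, 80.00 MiB
    (0x3, 2, 0x5020, 0x80),     # band_md: offset 80.12 MiB, 0.50 MiB
]

for rtype, ver, offs, size in regions:
    print(f"type 0x{rtype:x} ver {ver}: "
          f"offset {offs * BLOCK_MIB:.2f} MiB, size {size * BLOCK_MIB:.2f} MiB")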
00:30:24.678 [2024-10-07 11:41:06.267857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:24.678 [2024-10-07 11:41:06.267869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.660 ms 00:30:24.678 [2024-10-07 11:41:06.267880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.267957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.267971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:24.678 [2024-10-07 11:41:06.267982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:24.678 [2024-10-07 11:41:06.267993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.268037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.268049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:24.678 [2024-10-07 11:41:06.268059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:24.678 [2024-10-07 11:41:06.268069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.268094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:24.678 [2024-10-07 11:41:06.272869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.272899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:24.678 [2024-10-07 11:41:06.272912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 00:30:24.678 [2024-10-07 11:41:06.272922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.272953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.272964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:24.678 [2024-10-07 11:41:06.272975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:24.678 [2024-10-07 11:41:06.272985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.678 [2024-10-07 11:41:06.273042] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:24.678 [2024-10-07 11:41:06.273066] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:24.678 [2024-10-07 11:41:06.273101] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:24.678 [2024-10-07 11:41:06.273120] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:24.678 [2024-10-07 11:41:06.273209] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:24.678 [2024-10-07 11:41:06.273222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:24.678 [2024-10-07 11:41:06.273235] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:24.678 [2024-10-07 11:41:06.273251] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:24.678 [2024-10-07 11:41:06.273263] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:24.678 [2024-10-07 11:41:06.273275] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:24.678 [2024-10-07 11:41:06.273285] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:24.678 [2024-10-07 11:41:06.273295] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:24.678 [2024-10-07 11:41:06.273306] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:24.678 [2024-10-07 11:41:06.273317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.678 [2024-10-07 11:41:06.273326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:24.678 [2024-10-07 11:41:06.273337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:30:24.678 [2024-10-07 11:41:06.273347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.679 [2024-10-07 11:41:06.273418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.679 [2024-10-07 11:41:06.273432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:24.679 [2024-10-07 11:41:06.273443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:24.679 [2024-10-07 11:41:06.273452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.679 [2024-10-07 11:41:06.273546] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:24.679 [2024-10-07 11:41:06.273565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:24.679 [2024-10-07 11:41:06.273576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:24.679 [2024-10-07 11:41:06.273586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:24.679 [2024-10-07 11:41:06.273607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:24.679 [2024-10-07 11:41:06.273627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:24.679 [2024-10-07 11:41:06.273636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:24.679 [2024-10-07 11:41:06.273656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:24.679 [2024-10-07 11:41:06.273666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:24.679 [2024-10-07 11:41:06.273675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:24.679 [2024-10-07 11:41:06.273694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:24.679 [2024-10-07 11:41:06.273705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:24.679 [2024-10-07 11:41:06.273715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:24.679 [2024-10-07 11:41:06.273733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:24.679 [2024-10-07 11:41:06.273758] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:24.679 [2024-10-07 11:41:06.273777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.679 [2024-10-07 11:41:06.273795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:24.679 [2024-10-07 11:41:06.273804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.679 [2024-10-07 11:41:06.273823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:24.679 [2024-10-07 11:41:06.273832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.679 [2024-10-07 11:41:06.273850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:24.679 [2024-10-07 11:41:06.273859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.679 [2024-10-07 11:41:06.273878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:24.679 [2024-10-07 11:41:06.273887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:24.679 [2024-10-07 11:41:06.273905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:24.679 [2024-10-07 11:41:06.273914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:24.679 [2024-10-07 11:41:06.273923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:24.679 [2024-10-07 11:41:06.273932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:24.679 [2024-10-07 11:41:06.273942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:24.679 [2024-10-07 11:41:06.273951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:24.679 [2024-10-07 11:41:06.273970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:24.679 [2024-10-07 11:41:06.273979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.679 [2024-10-07 11:41:06.273988] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:24.679 [2024-10-07 11:41:06.273998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:24.679 [2024-10-07 11:41:06.274012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:24.679 [2024-10-07 11:41:06.274021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.679 [2024-10-07 11:41:06.274031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:24.679 [2024-10-07 11:41:06.274041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:24.679 [2024-10-07 11:41:06.274050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:24.679 
[2024-10-07 11:41:06.274060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:24.679 [2024-10-07 11:41:06.274069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:24.679 [2024-10-07 11:41:06.274079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:24.679 [2024-10-07 11:41:06.274089] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:24.679 [2024-10-07 11:41:06.274102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.679 [2024-10-07 11:41:06.274114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:24.679 [2024-10-07 11:41:06.274124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:24.679 [2024-10-07 11:41:06.274134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:24.679 [2024-10-07 11:41:06.274144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:24.679 [2024-10-07 11:41:06.274154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:24.679 [2024-10-07 11:41:06.274165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:24.679 [2024-10-07 11:41:06.274175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:24.679 [2024-10-07 11:41:06.274185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:24.679 [2024-10-07 11:41:06.274195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:24.679 [2024-10-07 11:41:06.274205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:24.679 [2024-10-07 11:41:06.274215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:24.679 [2024-10-07 11:41:06.274225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:24.679 [2024-10-07 11:41:06.274235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:24.679 [2024-10-07 11:41:06.274246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:24.679 [2024-10-07 11:41:06.274256] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:24.679 [2024-10-07 11:41:06.274267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.679 [2024-10-07 11:41:06.274278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:24.679 [2024-10-07 11:41:06.274298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:24.679 [2024-10-07 11:41:06.274308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:24.679 [2024-10-07 11:41:06.274319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:24.679 [2024-10-07 11:41:06.274330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.679 [2024-10-07 11:41:06.274341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:24.679 [2024-10-07 11:41:06.274351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:30:24.679 [2024-10-07 11:41:06.274361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.679 [2024-10-07 11:41:06.323479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.679 [2024-10-07 11:41:06.323521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:24.679 [2024-10-07 11:41:06.323536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.145 ms 00:30:24.679 [2024-10-07 11:41:06.323547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.679 [2024-10-07 11:41:06.323637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.679 [2024-10-07 11:41:06.323648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:24.679 [2024-10-07 11:41:06.323659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:24.679 [2024-10-07 11:41:06.323669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.679 [2024-10-07 11:41:06.370649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.679 [2024-10-07 11:41:06.370686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:24.679 [2024-10-07 11:41:06.370703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.988 ms 00:30:24.679 [2024-10-07 11:41:06.370714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.679 [2024-10-07 11:41:06.370763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.679 [2024-10-07 11:41:06.370775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:24.679 [2024-10-07 11:41:06.370787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:24.679 [2024-10-07 11:41:06.370797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.679 [2024-10-07 11:41:06.371287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.679 [2024-10-07 11:41:06.371310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:24.679 [2024-10-07 11:41:06.371321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:30:24.679 [2024-10-07 11:41:06.371337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.679 [2024-10-07 11:41:06.371456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.679 [2024-10-07 11:41:06.371469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:24.680 [2024-10-07 11:41:06.371480] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:30:24.680 [2024-10-07 11:41:06.371490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.938 [2024-10-07 11:41:06.389939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.938 [2024-10-07 11:41:06.389973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:24.938 [2024-10-07 11:41:06.390003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.457 ms 00:30:24.938 [2024-10-07 11:41:06.390014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.938 [2024-10-07 11:41:06.409234] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:24.938 [2024-10-07 11:41:06.409271] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:24.939 [2024-10-07 11:41:06.409303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.409314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:24.939 [2024-10-07 11:41:06.409326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.206 ms 00:30:24.939 [2024-10-07 11:41:06.409336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.438713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.438758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:24.939 [2024-10-07 11:41:06.438773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.383 ms 00:30:24.939 [2024-10-07 11:41:06.438784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.457144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.457176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:24.939 [2024-10-07 11:41:06.457205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.342 ms 00:30:24.939 [2024-10-07 11:41:06.457216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.475424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.475458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:24.939 [2024-10-07 11:41:06.475470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.188 ms 00:30:24.939 [2024-10-07 11:41:06.475480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.476219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.476250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:24.939 [2024-10-07 11:41:06.476263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:30:24.939 [2024-10-07 11:41:06.476274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.562206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.562271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:24.939 [2024-10-07 11:41:06.562293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.049 ms 00:30:24.939 [2024-10-07 11:41:06.562306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.573169] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:24.939 [2024-10-07 11:41:06.576044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.576075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:24.939 [2024-10-07 11:41:06.576088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.709 ms 00:30:24.939 [2024-10-07 11:41:06.576104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.576191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.576205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:24.939 [2024-10-07 11:41:06.576218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:24.939 [2024-10-07 11:41:06.576228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.577101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.577129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:24.939 [2024-10-07 11:41:06.577141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:30:24.939 [2024-10-07 11:41:06.577151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.577184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.577195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:24.939 [2024-10-07 11:41:06.577206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:24.939 [2024-10-07 11:41:06.577216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.577251] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:24.939 [2024-10-07 11:41:06.577264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.577274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:24.939 [2024-10-07 11:41:06.577285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:24.939 [2024-10-07 11:41:06.577298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.613360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.613415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:24.939 [2024-10-07 11:41:06.613430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.101 ms 00:30:24.939 [2024-10-07 11:41:06.613441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.939 [2024-10-07 11:41:06.613520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.939 [2024-10-07 11:41:06.613532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:24.939 [2024-10-07 11:41:06.613543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:24.939 [2024-10-07 11:41:06.613553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
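(Editor's annotation; the following is not part of the captured log.) The superblock dump above prints every metadata region as type/version/blk_offs/blk_sz, and the regions tile the device back to back: each region's blk_offs plus blk_sz is the next region's blk_offs, with the type:0xfffffffe entries covering the leftover free space. A minimal bash sketch that re-checks this invariant against the nvc offsets logged above (check_layout is a hypothetical helper, not something in the SPDK tree):

check_layout() {
    # Expect "blk_offs blk_sz" pairs on stdin, in layout order.
    local prev_end=0 offs sz ok=1
    while read -r offs sz; do
        # bash arithmetic accepts the 0x-prefixed hex straight from the log
        (( offs == prev_end )) || { echo "gap/overlap before blk_offs $offs"; ok=0; }
        prev_end=$(( offs + sz ))
    done
    (( ok )) && printf 'contiguous; layout ends at 0x%x blocks\n' "$prev_end"
}

# The nvc pairs exactly as dumped by ftl_superblock_v5_md_layout_dump above.
check_layout <<'EOF'
0x0 0x20
0x20 0x5000
0x5020 0x80
0x50a0 0x80
0x5120 0x800
0x5920 0x800
0x6120 0x800
0x6920 0x800
0x7120 0x40
0x7160 0x40
0x71a0 0x20
0x71c0 0x20
0x71e0 0x20
0x7200 0x20
0x7220 0x13c0e0
EOF

Run against this log it reports contiguous, ending at 0x143300 blocks; the base-dev list above satisfies the same check (0x0+0x20 = 0x20, 0x20+0x20 = 0x40, 0x40+0x1900000 = 0x1900040, and so on).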
00:30:24.939 [2024-10-07 11:41:06.614601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.684 ms, result 0 00:30:26.316  [2024-10-07T11:41:08.964Z] Copying: 29/1024 [MB] (29 MBps) [2024-10-07T11:41:09.897Z] Copying: 59/1024 [MB] (29 MBps) [2024-10-07T11:41:10.832Z] Copying: 89/1024 [MB] (29 MBps) [2024-10-07T11:41:12.205Z] Copying: 119/1024 [MB] (30 MBps) [2024-10-07T11:41:13.142Z] Copying: 148/1024 [MB] (29 MBps) [2024-10-07T11:41:14.075Z] Copying: 176/1024 [MB] (28 MBps) [2024-10-07T11:41:15.010Z] Copying: 205/1024 [MB] (28 MBps) [2024-10-07T11:41:15.942Z] Copying: 233/1024 [MB] (28 MBps) [2024-10-07T11:41:16.877Z] Copying: 262/1024 [MB] (28 MBps) [2024-10-07T11:41:18.250Z] Copying: 290/1024 [MB] (28 MBps) [2024-10-07T11:41:18.814Z] Copying: 318/1024 [MB] (28 MBps) [2024-10-07T11:41:20.188Z] Copying: 348/1024 [MB] (29 MBps) [2024-10-07T11:41:21.122Z] Copying: 377/1024 [MB] (29 MBps) [2024-10-07T11:41:22.061Z] Copying: 407/1024 [MB] (29 MBps) [2024-10-07T11:41:23.004Z] Copying: 435/1024 [MB] (28 MBps) [2024-10-07T11:41:23.937Z] Copying: 464/1024 [MB] (28 MBps) [2024-10-07T11:41:24.871Z] Copying: 492/1024 [MB] (27 MBps) [2024-10-07T11:41:25.805Z] Copying: 520/1024 [MB] (28 MBps) [2024-10-07T11:41:27.180Z] Copying: 549/1024 [MB] (29 MBps) [2024-10-07T11:41:28.167Z] Copying: 578/1024 [MB] (28 MBps) [2024-10-07T11:41:29.103Z] Copying: 608/1024 [MB] (29 MBps) [2024-10-07T11:41:30.037Z] Copying: 636/1024 [MB] (28 MBps) [2024-10-07T11:41:30.972Z] Copying: 665/1024 [MB] (28 MBps) [2024-10-07T11:41:31.906Z] Copying: 694/1024 [MB] (28 MBps) [2024-10-07T11:41:32.840Z] Copying: 721/1024 [MB] (27 MBps) [2024-10-07T11:41:33.804Z] Copying: 749/1024 [MB] (27 MBps) [2024-10-07T11:41:35.177Z] Copying: 776/1024 [MB] (27 MBps) [2024-10-07T11:41:36.111Z] Copying: 804/1024 [MB] (28 MBps) [2024-10-07T11:41:37.043Z] Copying: 833/1024 [MB] (28 MBps) [2024-10-07T11:41:37.977Z] Copying: 861/1024 [MB] (28 MBps) [2024-10-07T11:41:38.910Z] Copying: 890/1024 [MB] (28 MBps) [2024-10-07T11:41:39.844Z] Copying: 918/1024 [MB] (28 MBps) [2024-10-07T11:41:40.777Z] Copying: 947/1024 [MB] (28 MBps) [2024-10-07T11:41:42.153Z] Copying: 975/1024 [MB] (28 MBps) [2024-10-07T11:41:42.721Z] Copying: 1003/1024 [MB] (27 MBps) [2024-10-07T11:41:42.721Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-07 11:41:42.662296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.010 [2024-10-07 11:41:42.662370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:01.010 [2024-10-07 11:41:42.662396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:01.010 [2024-10-07 11:41:42.662413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.010 [2024-10-07 11:41:42.662456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:01.010 [2024-10-07 11:41:42.667964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.010 [2024-10-07 11:41:42.668003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:01.010 [2024-10-07 11:41:42.668018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.490 ms 00:31:01.010 [2024-10-07 11:41:42.668030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.010 [2024-10-07 11:41:42.668262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.010 [2024-10-07 11:41:42.668275] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:01.010 [2024-10-07 11:41:42.668288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:31:01.010 [2024-10-07 11:41:42.668300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.010 [2024-10-07 11:41:42.671509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.010 [2024-10-07 11:41:42.671539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:01.010 [2024-10-07 11:41:42.671553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:31:01.010 [2024-10-07 11:41:42.671566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.010 [2024-10-07 11:41:42.676762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.010 [2024-10-07 11:41:42.676798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:01.010 [2024-10-07 11:41:42.676810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.178 ms 00:31:01.010 [2024-10-07 11:41:42.676820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.010 [2024-10-07 11:41:42.713614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.010 [2024-10-07 11:41:42.713650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:01.010 [2024-10-07 11:41:42.713665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.772 ms 00:31:01.010 [2024-10-07 11:41:42.713676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.270 [2024-10-07 11:41:42.734821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.270 [2024-10-07 11:41:42.734866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:01.270 [2024-10-07 11:41:42.734881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.137 ms 00:31:01.270 [2024-10-07 11:41:42.734892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.270 [2024-10-07 11:41:42.736979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.270 [2024-10-07 11:41:42.737012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:01.270 [2024-10-07 11:41:42.737024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 00:31:01.270 [2024-10-07 11:41:42.737036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.270 [2024-10-07 11:41:42.773559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.270 [2024-10-07 11:41:42.773594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:01.270 [2024-10-07 11:41:42.773608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.564 ms 00:31:01.270 [2024-10-07 11:41:42.773619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.270 [2024-10-07 11:41:42.810206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.270 [2024-10-07 11:41:42.810240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:01.270 [2024-10-07 11:41:42.810253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.603 ms 00:31:01.270 [2024-10-07 11:41:42.810263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.270 [2024-10-07 11:41:42.845917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:01.270 [2024-10-07 11:41:42.845950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:01.270 [2024-10-07 11:41:42.845979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.666 ms 00:31:01.270 [2024-10-07 11:41:42.845989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.270 [2024-10-07 11:41:42.881603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.270 [2024-10-07 11:41:42.881636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:01.270 [2024-10-07 11:41:42.881649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.591 ms 00:31:01.270 [2024-10-07 11:41:42.881661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.270 [2024-10-07 11:41:42.881699] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:01.270 [2024-10-07 11:41:42.881715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:01.271 [2024-10-07 11:41:42.881728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:01.271 [2024-10-07 11:41:42.881748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881924] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.881995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 
[2024-10-07 11:41:42.882203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 
state: free 00:31:01.271 [2024-10-07 11:41:42.882478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 
0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:01.271 [2024-10-07 11:41:42.882771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:01.272 [2024-10-07 11:41:42.882781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:01.272 [2024-10-07 11:41:42.882792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:01.272 [2024-10-07 11:41:42.882803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:01.272 [2024-10-07 11:41:42.882814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:01.272 [2024-10-07 11:41:42.882833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:01.272 [2024-10-07 11:41:42.882843] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8a2eac3-8451-4d8a-b5b0-3d20daacab36 00:31:01.272 [2024-10-07 11:41:42.882854] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:01.272 [2024-10-07 11:41:42.882865] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:01.272 [2024-10-07 11:41:42.882875] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:01.272 [2024-10-07 11:41:42.882885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:01.272 [2024-10-07 11:41:42.882895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:01.272 [2024-10-07 11:41:42.882905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:01.272 [2024-10-07 11:41:42.882919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:01.272 [2024-10-07 11:41:42.882929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:01.272 [2024-10-07 11:41:42.882938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:01.272 [2024-10-07 11:41:42.882954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.272 [2024-10-07 11:41:42.882975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:01.272 [2024-10-07 11:41:42.882987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:31:01.272 [2024-10-07 11:41:42.882998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.272 [2024-10-07 11:41:42.902575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.272 [2024-10-07 11:41:42.902606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:01.272 [2024-10-07 11:41:42.902620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.552 ms 00:31:01.272 [2024-10-07 11:41:42.902636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.272 [2024-10-07 11:41:42.903194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.272 [2024-10-07 11:41:42.903215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:01.272 [2024-10-07 11:41:42.903227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 
00:31:01.272 [2024-10-07 11:41:42.903237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.272 [2024-10-07 11:41:42.948343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.272 [2024-10-07 11:41:42.948377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:01.272 [2024-10-07 11:41:42.948396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.272 [2024-10-07 11:41:42.948407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.272 [2024-10-07 11:41:42.948460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.272 [2024-10-07 11:41:42.948471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:01.272 [2024-10-07 11:41:42.948482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.272 [2024-10-07 11:41:42.948492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.272 [2024-10-07 11:41:42.948561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.272 [2024-10-07 11:41:42.948575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:01.272 [2024-10-07 11:41:42.948586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.272 [2024-10-07 11:41:42.948601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.272 [2024-10-07 11:41:42.948618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.272 [2024-10-07 11:41:42.948629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:01.272 [2024-10-07 11:41:42.948639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.272 [2024-10-07 11:41:42.948649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.073961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.531 [2024-10-07 11:41:43.074017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:01.531 [2024-10-07 11:41:43.074038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.531 [2024-10-07 11:41:43.074049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.175999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.531 [2024-10-07 11:41:43.176062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:01.531 [2024-10-07 11:41:43.176078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.531 [2024-10-07 11:41:43.176090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.176182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.531 [2024-10-07 11:41:43.176195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:01.531 [2024-10-07 11:41:43.176206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.531 [2024-10-07 11:41:43.176216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.176270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.531 [2024-10-07 11:41:43.176282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:01.531 [2024-10-07 11:41:43.176292] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.531 [2024-10-07 11:41:43.176303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.176424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.531 [2024-10-07 11:41:43.176437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:01.531 [2024-10-07 11:41:43.176448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.531 [2024-10-07 11:41:43.176458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.176498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.531 [2024-10-07 11:41:43.176511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:01.531 [2024-10-07 11:41:43.176521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.531 [2024-10-07 11:41:43.176532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.176577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.531 [2024-10-07 11:41:43.176590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:01.531 [2024-10-07 11:41:43.176600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.531 [2024-10-07 11:41:43.176610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.176656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:01.531 [2024-10-07 11:41:43.176669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:01.531 [2024-10-07 11:41:43.176680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:01.531 [2024-10-07 11:41:43.176690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.531 [2024-10-07 11:41:43.176841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 515.364 ms, result 0 00:31:02.908 00:31:02.908 00:31:02.908 11:41:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:04.813 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:04.813 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:04.813 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:04.813 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:04.813 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:04.813 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:04.814 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:04.814 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:04.814 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78978 00:31:04.814 11:41:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78978 ']' 00:31:04.814 11:41:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 
78978 00:31:04.814 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78978) - No such process 00:31:04.814 Process with pid 78978 is not found 00:31:04.814 11:41:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 78978 is not found' 00:31:04.814 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:05.073 Remove shared memory files 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:05.073 ************************************ 00:31:05.073 END TEST ftl_dirty_shutdown 00:31:05.073 ************************************ 00:31:05.073 00:31:05.073 real 3m29.442s 00:31:05.073 user 3m56.860s 00:31:05.073 sys 0m37.897s 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.073 11:41:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:05.331 11:41:46 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:05.331 11:41:46 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:05.331 11:41:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.331 11:41:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:05.331 ************************************ 00:31:05.331 START TEST ftl_upgrade_shutdown 00:31:05.331 ************************************ 00:31:05.331 11:41:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:05.332 * Looking for test storage... 
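(Editor's annotation; the following is not part of the captured log.) The xtrace just below steps through the lcov version gate: the test calls `lt 1.15 2`, which scripts/common.sh resolves via cmp_versions by splitting both versions on the IFS set ".-:" and comparing numerically field by field, so 1.15 sorts before 2 because the first fields already differ. A condensed, hypothetical re-implementation of the same idea (ver_lt is not the real helper's name):

# Return success when $1 sorts strictly before $2, numeric field by field.
ver_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # missing fields compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0)
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # succeeds, matching the trace's "return 0" below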
00:31:05.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:05.332 11:41:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:05.332 11:41:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:31:05.332 11:41:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:05.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.591 --rc genhtml_branch_coverage=1 00:31:05.591 --rc genhtml_function_coverage=1 00:31:05.591 --rc genhtml_legend=1 00:31:05.591 --rc geninfo_all_blocks=1 00:31:05.591 --rc geninfo_unexecuted_blocks=1 00:31:05.591 00:31:05.591 ' 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:05.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.591 --rc genhtml_branch_coverage=1 00:31:05.591 --rc genhtml_function_coverage=1 00:31:05.591 --rc genhtml_legend=1 00:31:05.591 --rc geninfo_all_blocks=1 00:31:05.591 --rc geninfo_unexecuted_blocks=1 00:31:05.591 00:31:05.591 ' 00:31:05.591 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:05.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.591 --rc genhtml_branch_coverage=1 00:31:05.592 --rc genhtml_function_coverage=1 00:31:05.592 --rc genhtml_legend=1 00:31:05.592 --rc geninfo_all_blocks=1 00:31:05.592 --rc geninfo_unexecuted_blocks=1 00:31:05.592 00:31:05.592 ' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:05.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.592 --rc genhtml_branch_coverage=1 00:31:05.592 --rc genhtml_function_coverage=1 00:31:05.592 --rc genhtml_legend=1 00:31:05.592 --rc geninfo_all_blocks=1 00:31:05.592 --rc geninfo_unexecuted_blocks=1 00:31:05.592 00:31:05.592 ' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:05.592 11:41:47 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81235 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81235 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81235 ']' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.592 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 [2024-10-07 11:41:47.211101] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
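(Editor's annotation; the following is not part of the captured log.) With spdk_tgt up and listening (pid 81235), create_base_bdev attaches the PCIe controller at 0000:00:11.0 as "base", and get_bdev_size then derives the bdev's size in MiB from the bdev_get_bdevs JSON that follows: block_size 4096 times num_blocks 1310720 is 5120 MiB, which is why the trace further down arrives at base_size=5120 before the `[[ 20480 -le 5120 ]]` test evaluates false. A hedged sketch of that arithmetic (bdev_size_mib is a hypothetical name for what get_bdev_size computes):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

bdev_size_mib() {
    local info bs nb
    info=$("$rpc" bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$info")   # 4096 in the dump below
    nb=$(jq '.[] .num_blocks' <<< "$info")   # 1310720 in the dump below
    echo $(( bs * nb / 1024 / 1024 ))        # 4096 * 1310720 / 2^20 = 5120
}

bdev_size_mib basen1   # -> 5120 (MiB)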
00:31:05.592 [2024-10-07 11:41:47.211224] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81235 ] 00:31:05.851 [2024-10-07 11:41:47.384666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.109 [2024-10-07 11:41:47.641624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.046 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:07.046 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:07.047 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:07.313 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:07.313 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:07.313 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:07.313 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:31:07.313 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:07.313 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:07.313 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:31:07.313 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:07.573 { 00:31:07.573 "name": "basen1", 00:31:07.573 "aliases": [ 00:31:07.573 "e1e4ee0a-0d61-4d49-aacb-eef46b23c393" 00:31:07.573 ], 00:31:07.573 "product_name": "NVMe disk", 00:31:07.573 "block_size": 4096, 00:31:07.573 "num_blocks": 1310720, 00:31:07.573 "uuid": "e1e4ee0a-0d61-4d49-aacb-eef46b23c393", 00:31:07.573 "numa_id": -1, 00:31:07.573 "assigned_rate_limits": { 00:31:07.573 "rw_ios_per_sec": 0, 00:31:07.573 "rw_mbytes_per_sec": 0, 00:31:07.573 "r_mbytes_per_sec": 0, 00:31:07.573 "w_mbytes_per_sec": 0 00:31:07.573 }, 00:31:07.573 "claimed": true, 00:31:07.573 "claim_type": "read_many_write_one", 00:31:07.573 "zoned": false, 00:31:07.573 "supported_io_types": { 00:31:07.573 "read": true, 00:31:07.573 "write": true, 00:31:07.573 "unmap": true, 00:31:07.573 "flush": true, 00:31:07.573 "reset": true, 00:31:07.573 "nvme_admin": true, 00:31:07.573 "nvme_io": true, 00:31:07.573 "nvme_io_md": false, 00:31:07.573 "write_zeroes": true, 00:31:07.573 "zcopy": false, 00:31:07.573 "get_zone_info": false, 00:31:07.573 "zone_management": false, 00:31:07.573 "zone_append": false, 00:31:07.573 "compare": true, 00:31:07.573 "compare_and_write": false, 00:31:07.573 "abort": true, 00:31:07.573 "seek_hole": false, 00:31:07.573 "seek_data": false, 00:31:07.573 "copy": true, 00:31:07.573 "nvme_iov_md": false 00:31:07.573 }, 00:31:07.573 "driver_specific": { 00:31:07.573 "nvme": [ 00:31:07.573 { 00:31:07.573 "pci_address": "0000:00:11.0", 00:31:07.573 "trid": { 00:31:07.573 "trtype": "PCIe", 00:31:07.573 "traddr": "0000:00:11.0" 00:31:07.573 }, 00:31:07.573 "ctrlr_data": { 00:31:07.573 "cntlid": 0, 00:31:07.573 "vendor_id": "0x1b36", 00:31:07.573 "model_number": "QEMU NVMe Ctrl", 00:31:07.573 "serial_number": "12341", 00:31:07.573 "firmware_revision": "8.0.0", 00:31:07.573 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:07.573 "oacs": { 00:31:07.573 "security": 0, 00:31:07.573 "format": 1, 00:31:07.573 "firmware": 0, 00:31:07.573 "ns_manage": 1 00:31:07.573 }, 00:31:07.573 "multi_ctrlr": false, 00:31:07.573 "ana_reporting": false 00:31:07.573 }, 00:31:07.573 "vs": { 00:31:07.573 "nvme_version": "1.4" 00:31:07.573 }, 00:31:07.573 "ns_data": { 00:31:07.573 "id": 1, 00:31:07.573 "can_share": false 00:31:07.573 } 00:31:07.573 } 00:31:07.573 ], 00:31:07.573 "mp_policy": "active_passive" 00:31:07.573 } 00:31:07.573 } 00:31:07.573 ]' 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:07.573 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:07.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=8a963e53-2b69-4337-94a8-72fcb7d769c1 00:31:07.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:07.832 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a963e53-2b69-4337-94a8-72fcb7d769c1 00:31:08.091 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:08.091 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d1306d2b-3ff9-4f54-a0b8-9b5c3cd1671b 00:31:08.091 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d1306d2b-3ff9-4f54-a0b8-9b5c3cd1671b 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2bed831b-3754-4c0a-acb4-2ab0a3f0fb44 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2bed831b-3754-4c0a-acb4-2ab0a3f0fb44 ]] 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2bed831b-3754-4c0a-acb4-2ab0a3f0fb44 5120 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2bed831b-3754-4c0a-acb4-2ab0a3f0fb44 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2bed831b-3754-4c0a-acb4-2ab0a3f0fb44 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2bed831b-3754-4c0a-acb4-2ab0a3f0fb44 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:31:08.350 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2bed831b-3754-4c0a-acb4-2ab0a3f0fb44 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:08.609 { 00:31:08.609 "name": "2bed831b-3754-4c0a-acb4-2ab0a3f0fb44", 00:31:08.609 "aliases": [ 00:31:08.609 "lvs/basen1p0" 00:31:08.609 ], 00:31:08.609 "product_name": "Logical Volume", 00:31:08.609 "block_size": 4096, 00:31:08.609 "num_blocks": 5242880, 00:31:08.609 "uuid": "2bed831b-3754-4c0a-acb4-2ab0a3f0fb44", 00:31:08.609 "assigned_rate_limits": { 00:31:08.609 "rw_ios_per_sec": 0, 00:31:08.609 "rw_mbytes_per_sec": 0, 00:31:08.609 "r_mbytes_per_sec": 0, 00:31:08.609 "w_mbytes_per_sec": 0 00:31:08.609 }, 00:31:08.609 "claimed": false, 00:31:08.609 "zoned": false, 00:31:08.609 "supported_io_types": { 00:31:08.609 "read": true, 00:31:08.609 "write": true, 00:31:08.609 "unmap": true, 00:31:08.609 "flush": false, 00:31:08.609 "reset": true, 00:31:08.609 "nvme_admin": false, 00:31:08.609 "nvme_io": false, 00:31:08.609 "nvme_io_md": false, 00:31:08.609 "write_zeroes": 
true, 00:31:08.609 "zcopy": false, 00:31:08.609 "get_zone_info": false, 00:31:08.609 "zone_management": false, 00:31:08.609 "zone_append": false, 00:31:08.609 "compare": false, 00:31:08.609 "compare_and_write": false, 00:31:08.609 "abort": false, 00:31:08.609 "seek_hole": true, 00:31:08.609 "seek_data": true, 00:31:08.609 "copy": false, 00:31:08.609 "nvme_iov_md": false 00:31:08.609 }, 00:31:08.609 "driver_specific": { 00:31:08.609 "lvol": { 00:31:08.609 "lvol_store_uuid": "d1306d2b-3ff9-4f54-a0b8-9b5c3cd1671b", 00:31:08.609 "base_bdev": "basen1", 00:31:08.609 "thin_provision": true, 00:31:08.609 "num_allocated_clusters": 0, 00:31:08.609 "snapshot": false, 00:31:08.609 "clone": false, 00:31:08.609 "esnap_clone": false 00:31:08.609 } 00:31:08.609 } 00:31:08.609 } 00:31:08.609 ]' 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:08.609 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:08.868 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:08.868 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:08.868 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:09.128 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:09.128 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:09.128 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2bed831b-3754-4c0a-acb4-2ab0a3f0fb44 -c cachen1p0 --l2p_dram_limit 2 00:31:09.388 [2024-10-07 11:41:50.946988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.947045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:09.388 [2024-10-07 11:41:50.947064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:09.388 [2024-10-07 11:41:50.947075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.947139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.947151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:09.388 [2024-10-07 11:41:50.947165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:31:09.388 [2024-10-07 11:41:50.947175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.947201] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:09.388 [2024-10-07 
11:41:50.948243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:09.388 [2024-10-07 11:41:50.948279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.948291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:09.388 [2024-10-07 11:41:50.948305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.083 ms 00:31:09.388 [2024-10-07 11:41:50.948320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.948401] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID fc0d9e76-6a2e-491c-aad8-b95ec45f2ca9 00:31:09.388 [2024-10-07 11:41:50.949812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.949854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:09.388 [2024-10-07 11:41:50.949867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:09.388 [2024-10-07 11:41:50.949880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.957262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.957296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:09.388 [2024-10-07 11:41:50.957309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.348 ms 00:31:09.388 [2024-10-07 11:41:50.957322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.957369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.957386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:09.388 [2024-10-07 11:41:50.957397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:09.388 [2024-10-07 11:41:50.957414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.957475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.957489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:09.388 [2024-10-07 11:41:50.957501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:09.388 [2024-10-07 11:41:50.957513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.957539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:09.388 [2024-10-07 11:41:50.962748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.962776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:09.388 [2024-10-07 11:41:50.962791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.213 ms 00:31:09.388 [2024-10-07 11:41:50.962802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.962835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.388 [2024-10-07 11:41:50.962846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:09.388 [2024-10-07 11:41:50.962860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:09.388 [2024-10-07 11:41:50.962872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:09.388 [2024-10-07 11:41:50.962919] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:09.388 [2024-10-07 11:41:50.963042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:09.388 [2024-10-07 11:41:50.963063] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:09.389 [2024-10-07 11:41:50.963076] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:09.389 [2024-10-07 11:41:50.963095] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963106] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963120] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:09.389 [2024-10-07 11:41:50.963130] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:09.389 [2024-10-07 11:41:50.963143] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:09.389 [2024-10-07 11:41:50.963153] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:09.389 [2024-10-07 11:41:50.963166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.389 [2024-10-07 11:41:50.963176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:09.389 [2024-10-07 11:41:50.963190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:31:09.389 [2024-10-07 11:41:50.963200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.389 [2024-10-07 11:41:50.963274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.389 [2024-10-07 11:41:50.963299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:09.389 [2024-10-07 11:41:50.963312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:09.389 [2024-10-07 11:41:50.963322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.389 [2024-10-07 11:41:50.963409] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:09.389 [2024-10-07 11:41:50.963421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:09.389 [2024-10-07 11:41:50.963434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:09.389 [2024-10-07 11:41:50.963467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:09.389 [2024-10-07 11:41:50.963489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:09.389 [2024-10-07 11:41:50.963501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:09.389 [2024-10-07 11:41:50.963510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:09.389 [2024-10-07 11:41:50.963531] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:09.389 [2024-10-07 11:41:50.963543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:09.389 [2024-10-07 11:41:50.963564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:09.389 [2024-10-07 11:41:50.963576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:09.389 [2024-10-07 11:41:50.963600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:09.389 [2024-10-07 11:41:50.963611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:09.389 [2024-10-07 11:41:50.963633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:09.389 [2024-10-07 11:41:50.963642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:09.389 [2024-10-07 11:41:50.963664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:09.389 [2024-10-07 11:41:50.963676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:09.389 [2024-10-07 11:41:50.963698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:09.389 [2024-10-07 11:41:50.963707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:09.389 [2024-10-07 11:41:50.963728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:09.389 [2024-10-07 11:41:50.963756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:09.389 [2024-10-07 11:41:50.963781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:09.389 [2024-10-07 11:41:50.963790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:09.389 [2024-10-07 11:41:50.963811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:09.389 [2024-10-07 11:41:50.963844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:09.389 [2024-10-07 11:41:50.963874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:09.389 [2024-10-07 11:41:50.963885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963894] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:09.389 [2024-10-07 11:41:50.963907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:09.389 [2024-10-07 11:41:50.963919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:09.389 [2024-10-07 11:41:50.963931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:09.389 [2024-10-07 11:41:50.963942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:09.389 [2024-10-07 11:41:50.963959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:09.389 [2024-10-07 11:41:50.963968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:09.389 [2024-10-07 11:41:50.963980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:09.389 [2024-10-07 11:41:50.963989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:09.389 [2024-10-07 11:41:50.964001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:09.389 [2024-10-07 11:41:50.964015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:09.389 [2024-10-07 11:41:50.964030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:09.389 [2024-10-07 11:41:50.964055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:09.389 [2024-10-07 11:41:50.964088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:09.389 [2024-10-07 11:41:50.964101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:09.389 [2024-10-07 11:41:50.964112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:09.389 [2024-10-07 11:41:50.964124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:09.389 [2024-10-07 11:41:50.964206] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:09.389 [2024-10-07 11:41:50.964219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:09.389 [2024-10-07 11:41:50.964244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:09.389 [2024-10-07 11:41:50.964254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:09.389 [2024-10-07 11:41:50.964267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:09.389 [2024-10-07 11:41:50.964278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.389 [2024-10-07 11:41:50.964291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:09.389 [2024-10-07 11:41:50.964302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.929 ms 00:31:09.389 [2024-10-07 11:41:50.964314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.389 [2024-10-07 11:41:50.964358] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
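
The two layout dumps above express every region twice: once in MiB (the ftl_layout dump) and once as hex block offsets/sizes in the superblock metadata, where one block is the base device's 4096-byte sector (see "block_size": 4096 in the bdev dumps earlier). A throwaway converter, hypothetical and not part of ftl/common.sh, ties the two views together:

    blk_to_mib() {
        # 1 block = 4096 B and 1 MiB = 1048576 B, so MiB = blocks / 256
        printf 'scale=2; %d / 256\n' "$(( $1 ))" | bc
    }
    blk_to_mib 0x480000   # 18432.00 -> the type:0x9 base data region (data_btm)
    blk_to_mib 0xe80      # 14.50    -> the type:0x2 region, matching "Region l2p"
    blk_to_mib 0x800      # 8.00     -> the 0x800-block regions, matching p2l0..p2l3
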
00:31:09.389 [2024-10-07 11:41:50.964376] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:12.678 [2024-10-07 11:41:54.334346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.678 [2024-10-07 11:41:54.334422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:12.678 [2024-10-07 11:41:54.334441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3375.455 ms 00:31:12.678 [2024-10-07 11:41:54.334454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.678 [2024-10-07 11:41:54.373853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.678 [2024-10-07 11:41:54.373909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:12.678 [2024-10-07 11:41:54.373926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.091 ms 00:31:12.678 [2024-10-07 11:41:54.373940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.678 [2024-10-07 11:41:54.374054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.678 [2024-10-07 11:41:54.374077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:12.678 [2024-10-07 11:41:54.374089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:31:12.678 [2024-10-07 11:41:54.374105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.442079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.442159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:12.938 [2024-10-07 11:41:54.442196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 68.032 ms 00:31:12.938 [2024-10-07 11:41:54.442226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.442321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.442352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:12.938 [2024-10-07 11:41:54.442375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:12.938 [2024-10-07 11:41:54.442401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.443107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.443156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:12.938 [2024-10-07 11:41:54.443198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.566 ms 00:31:12.938 [2024-10-07 11:41:54.443231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.443306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.443333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:12.938 [2024-10-07 11:41:54.443356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:12.938 [2024-10-07 11:41:54.443385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.469637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.469680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:12.938 [2024-10-07 11:41:54.469695] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.256 ms 00:31:12.938 [2024-10-07 11:41:54.469707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.482296] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:12.938 [2024-10-07 11:41:54.483340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.483370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:12.938 [2024-10-07 11:41:54.483385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.538 ms 00:31:12.938 [2024-10-07 11:41:54.483399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.516094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.516135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:12.938 [2024-10-07 11:41:54.516158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.713 ms 00:31:12.938 [2024-10-07 11:41:54.516170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.516263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.516277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:12.938 [2024-10-07 11:41:54.516294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:31:12.938 [2024-10-07 11:41:54.516304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.552553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.552608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:12.938 [2024-10-07 11:41:54.552627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.249 ms 00:31:12.938 [2024-10-07 11:41:54.552638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.589192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.589228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:12.938 [2024-10-07 11:41:54.589245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.568 ms 00:31:12.938 [2024-10-07 11:41:54.589255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.938 [2024-10-07 11:41:54.589989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.938 [2024-10-07 11:41:54.590018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:12.938 [2024-10-07 11:41:54.590033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.692 ms 00:31:12.938 [2024-10-07 11:41:54.590043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.197 [2024-10-07 11:41:54.690366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.197 [2024-10-07 11:41:54.690413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:13.197 [2024-10-07 11:41:54.690435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.424 ms 00:31:13.197 [2024-10-07 11:41:54.690450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.197 [2024-10-07 11:41:54.727369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
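
Every management step in this startup sequence is traced as the same four records: Action, name, duration, status. When a startup looks slow, ranking steps by duration pinpoints the culprit; here "Scrub NV cache" dominates at 3375.455 ms. A hypothetical filter over the saved console log (the build.log name is an assumption, and one trace record per line is assumed):

    awk '/name:/     { sub(/.*name: /, "");     name = $0 }
         /duration:/ { sub(/.*duration: /, ""); print $1, name }' build.log \
      | sort -rn | head -5
    # 3375.455 Scrub NV cache
    # 100.424 Wipe P2L region
    # ...
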
00:31:13.197 [2024-10-07 11:41:54.727410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:13.197 [2024-10-07 11:41:54.727428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.892 ms 00:31:13.197 [2024-10-07 11:41:54.727439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.197 [2024-10-07 11:41:54.764019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.197 [2024-10-07 11:41:54.764070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:13.197 [2024-10-07 11:41:54.764088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.593 ms 00:31:13.197 [2024-10-07 11:41:54.764099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.197 [2024-10-07 11:41:54.801316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.197 [2024-10-07 11:41:54.801354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:13.197 [2024-10-07 11:41:54.801373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.231 ms 00:31:13.197 [2024-10-07 11:41:54.801383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.197 [2024-10-07 11:41:54.801435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.197 [2024-10-07 11:41:54.801447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:13.197 [2024-10-07 11:41:54.801465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:13.197 [2024-10-07 11:41:54.801477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.197 [2024-10-07 11:41:54.801585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.197 [2024-10-07 11:41:54.801598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:13.197 [2024-10-07 11:41:54.801611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:31:13.197 [2024-10-07 11:41:54.801621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.197 [2024-10-07 11:41:54.802768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3861.537 ms, result 0 00:31:13.197 { 00:31:13.197 "name": "ftl", 00:31:13.197 "uuid": "fc0d9e76-6a2e-491c-aad8-b95ec45f2ca9" 00:31:13.197 } 00:31:13.197 11:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:13.456 [2024-10-07 11:41:55.009502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.456 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:13.715 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:13.715 [2024-10-07 11:41:55.409282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:13.974 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:13.974 [2024-10-07 11:41:55.602901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:13.974 11:41:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:14.542 Fill FTL, iteration 1 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81363 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81363 /var/tmp/spdk.tgt.sock 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81363 ']' 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:14.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:14.542 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:14.542 [2024-10-07 11:41:56.059513] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
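
The upgrade_shutdown.sh counters set at @28-@35 above define the whole data pass: two iterations, each a 1024 x 1 MiB fill at queue depth 2 followed by an MD5 readback, with seek/skip advanced between passes. Restated as a sketch of the loop the trace walks through next (the fill/readback bodies are shown in the traces below):

    size=1073741824 bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0 sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        # write the next 1 GiB window of random data through the initiator,
        # then read it back, checksum it, and remember the sum for later;
        # seek and skip each advance by $count so pass 2 covers the second GiB
    done
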
00:31:14.542 [2024-10-07 11:41:56.059630] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81363 ] 00:31:14.542 [2024-10-07 11:41:56.230565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.801 [2024-10-07 11:41:56.448256] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.735 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:15.735 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:15.735 11:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:15.994 ftln1 00:31:15.994 11:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:15.994 11:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81363 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81363 ']' 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81363 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81363 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:16.252 killing process with pid 81363 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81363' 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81363 00:31:16.252 11:41:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81363 00:31:18.805 11:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:18.805 11:42:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:18.805 [2024-10-07 11:42:00.398668] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
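
tcp_dd is what carries each pass: it spins up the throwaway initiator seen above (pid 81363) on its own RPC socket, points spdk_dd at the exported namespace, and tears the initiator down afterwards. Condensed from the commands in the trace (paths shortened to repo-relative; waitforlisten/killprocess steps omitted):

    # second SPDK instance on its own RPC socket, acting as NVMe/TCP initiator
    build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &

    # expose the target's FTL namespace locally as bdev "ftln1"
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

    # fill pass 1: 1024 x 1 MiB of urandom at qd 2, starting at offset 0
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
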
00:31:18.805 [2024-10-07 11:42:00.398797] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81424 ] 00:31:19.076 [2024-10-07 11:42:00.570600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.076 [2024-10-07 11:42:00.777985] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.975  [2024-10-07T11:42:03.253Z] Copying: 247/1024 [MB] (247 MBps) [2024-10-07T11:42:04.630Z] Copying: 492/1024 [MB] (245 MBps) [2024-10-07T11:42:05.568Z] Copying: 738/1024 [MB] (246 MBps) [2024-10-07T11:42:05.568Z] Copying: 986/1024 [MB] (248 MBps) [2024-10-07T11:42:06.946Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:31:25.235 00:31:25.235 Calculate MD5 checksum, iteration 1 00:31:25.235 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:25.235 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:25.235 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:25.235 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:25.235 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:25.235 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:25.235 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:25.235 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:25.235 [2024-10-07 11:42:06.767795] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
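
Verification reverses the path: the same 1 GiB window is read back from ftln1 into a scratch file over TCP, and its MD5 is captured by the md5sum/cut step that follows below. Condensed, one verification pass looks like:

    # read back window i from the FTL bdev into a scratch file ...
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    # ... and record its checksum
    sums[i]=$(md5sum test/ftl/file | cut -f1 -d' ')
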
00:31:25.235 [2024-10-07 11:42:06.768100] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81488 ] 00:31:25.235 [2024-10-07 11:42:06.942620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.494 [2024-10-07 11:42:07.175103] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.398  [2024-10-07T11:42:09.109Z] Copying: 712/1024 [MB] (712 MBps) [2024-10-07T11:42:10.488Z] Copying: 1024/1024 [MB] (average 706 MBps) 00:31:28.777 00:31:28.777 11:42:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:28.777 11:42:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:30.680 Fill FTL, iteration 2 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9263ad9e2ce20f0defe4d1cbf730c699 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:30.680 11:42:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:30.680 [2024-10-07 11:42:11.976505] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
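
The second pass writes a disjoint window: seek jumps from 0 to 1024 blocks of bs, i.e. the second GiB, and skip follows the same progression on readback. Each iteration's MD5 lands in sums[] presumably so the same windows can be re-read and compared after the prep_upgrade_on_shutdown cycle later in the test; a hedged sketch of what that re-check would look like (the tcp_dd helper is the harness function seen above, the file path is assumed):

    # hypothetical post-restart re-check, one window per iteration
    for ((i = 0; i < iterations; i++)); do
        tcp_dd --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 \
               --qd=2 --skip=$((i * 1024))
        [[ ${sums[i]} == $(md5sum test/ftl/file | cut -f1 -d' ') ]] || return 1
    done
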
00:31:30.680 [2024-10-07 11:42:11.976768] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81545 ] 00:31:30.680 [2024-10-07 11:42:12.148844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.680 [2024-10-07 11:42:12.374982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.633  [2024-10-07T11:42:14.912Z] Copying: 247/1024 [MB] (247 MBps) [2024-10-07T11:42:15.849Z] Copying: 497/1024 [MB] (250 MBps) [2024-10-07T11:42:17.227Z] Copying: 741/1024 [MB] (244 MBps) [2024-10-07T11:42:17.227Z] Copying: 988/1024 [MB] (247 MBps) [2024-10-07T11:42:18.603Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:31:36.892 00:31:36.892 11:42:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:36.892 11:42:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:36.892 Calculate MD5 checksum, iteration 2 00:31:36.892 11:42:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:36.892 11:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:36.892 11:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:36.892 11:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:36.892 11:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:36.892 11:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:36.892 [2024-10-07 11:42:18.361223] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
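
Both fill passes settle around 246 MBps while the first readback averaged 706 MBps; at those reported averages each direction of a 1024 MB window costs roughly:

    echo 'scale=1; 1024 / 246' | bc   # ~4.1 s per 1024 MB fill pass
    echo 'scale=1; 1024 / 706' | bc   # ~1.4 s per 1024 MB readback pass
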
00:31:36.892 [2024-10-07 11:42:18.361491] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81613 ] 00:31:36.892 [2024-10-07 11:42:18.533049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.152 [2024-10-07 11:42:18.754924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.061  [2024-10-07T11:42:21.337Z] Copying: 638/1024 [MB] (638 MBps) [2024-10-07T11:42:22.715Z] Copying: 1024/1024 [MB] (average 633 MBps) 00:31:41.004 00:31:41.004 11:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:41.004 11:42:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:42.907 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:42.907 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6e9c9829a975a18dda2e2bf56c761f85 00:31:42.907 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:42.907 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:42.907 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:42.907 [2024-10-07 11:42:24.455986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.907 [2024-10-07 11:42:24.456060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:42.907 [2024-10-07 11:42:24.456077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:42.907 [2024-10-07 11:42:24.456093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.907 [2024-10-07 11:42:24.456122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.907 [2024-10-07 11:42:24.456154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:42.907 [2024-10-07 11:42:24.456166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:42.907 [2024-10-07 11:42:24.456177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.907 [2024-10-07 11:42:24.456198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.907 [2024-10-07 11:42:24.456211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:42.907 [2024-10-07 11:42:24.456222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:42.907 [2024-10-07 11:42:24.456232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.907 [2024-10-07 11:42:24.456316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.308 ms, result 0 00:31:42.907 true 00:31:42.907 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:43.166 { 00:31:43.166 "name": "ftl", 00:31:43.166 "properties": [ 00:31:43.166 { 00:31:43.166 "name": "superblock_version", 00:31:43.166 "value": 5, 00:31:43.166 "read-only": true 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "name": "base_device", 00:31:43.166 "bands": [ 00:31:43.166 { 00:31:43.166 "id": 0, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 
00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 1, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 2, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 3, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 4, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 5, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 6, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 7, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 8, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 9, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 10, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 11, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 12, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 13, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 14, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 15, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 16, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 17, 00:31:43.166 "state": "FREE", 00:31:43.166 "validity": 0.0 00:31:43.166 } 00:31:43.166 ], 00:31:43.166 "read-only": true 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "name": "cache_device", 00:31:43.166 "type": "bdev", 00:31:43.166 "chunks": [ 00:31:43.166 { 00:31:43.166 "id": 0, 00:31:43.166 "state": "INACTIVE", 00:31:43.166 "utilization": 0.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 1, 00:31:43.166 "state": "CLOSED", 00:31:43.166 "utilization": 1.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 2, 00:31:43.166 "state": "CLOSED", 00:31:43.166 "utilization": 1.0 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 3, 00:31:43.166 "state": "OPEN", 00:31:43.166 "utilization": 0.001953125 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "id": 4, 00:31:43.166 "state": "OPEN", 00:31:43.166 "utilization": 0.0 00:31:43.166 } 00:31:43.166 ], 00:31:43.166 "read-only": true 00:31:43.166 }, 00:31:43.166 { 00:31:43.166 "name": "verbose_mode", 00:31:43.166 "value": true, 00:31:43.166 "unit": "", 00:31:43.167 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:43.167 }, 00:31:43.167 { 00:31:43.167 "name": "prep_upgrade_on_shutdown", 00:31:43.167 "value": false, 00:31:43.167 "unit": "", 00:31:43.167 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:43.167 } 00:31:43.167 ] 00:31:43.167 } 00:31:43.167 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:43.167 [2024-10-07 11:42:24.863699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
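
In the properties dump above, cache chunks 1 and 2 are CLOSED at utilization 1.0 (presumably the two filled windows passing through the NV cache) and chunk 3 is OPEN with a sliver of data, so the used-chunk count the test derives below at upgrade_shutdown.sh@63 comes out to 3:

    # the filter applied at upgrade_shutdown.sh@63:
    scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'
    # -> 3: chunks 1, 2 (CLOSED, 1.0) and 3 (OPEN, 0.001953125);
    #    chunk 0 is INACTIVE and chunk 4 still empty, both at 0.0
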
00:31:43.167 [2024-10-07 11:42:24.863762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:43.167 [2024-10-07 11:42:24.863779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:43.167 [2024-10-07 11:42:24.863790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.167 [2024-10-07 11:42:24.863817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.167 [2024-10-07 11:42:24.863829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:43.167 [2024-10-07 11:42:24.863839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:43.167 [2024-10-07 11:42:24.863849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.167 [2024-10-07 11:42:24.863869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.167 [2024-10-07 11:42:24.863880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:43.167 [2024-10-07 11:42:24.863890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:43.167 [2024-10-07 11:42:24.863901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.167 [2024-10-07 11:42:24.863977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0 00:31:43.167 true 00:31:43.425 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:43.425 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:43.425 11:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:43.425 11:42:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:43.425 11:42:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:43.426 11:42:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:43.685 [2024-10-07 11:42:25.267437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.685 [2024-10-07 11:42:25.267504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:43.685 [2024-10-07 11:42:25.267521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:43.685 [2024-10-07 11:42:25.267532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.685 [2024-10-07 11:42:25.267557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.685 [2024-10-07 11:42:25.267568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:43.685 [2024-10-07 11:42:25.267579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:43.685 [2024-10-07 11:42:25.267588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.685 [2024-10-07 11:42:25.267609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.685 [2024-10-07 11:42:25.267620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:43.685 [2024-10-07 11:42:25.267631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:43.685 [2024-10-07 11:42:25.267641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:43.685 [2024-10-07 11:42:25.267700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.254 ms, result 0 00:31:43.685 true 00:31:43.685 11:42:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:43.944 { 00:31:43.944 "name": "ftl", 00:31:43.944 "properties": [ 00:31:43.944 { 00:31:43.944 "name": "superblock_version", 00:31:43.944 "value": 5, 00:31:43.944 "read-only": true 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "name": "base_device", 00:31:43.944 "bands": [ 00:31:43.944 { 00:31:43.944 "id": 0, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 1, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 2, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 3, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 4, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 5, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 6, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 7, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 8, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 9, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.944 }, 00:31:43.944 { 00:31:43.944 "id": 10, 00:31:43.944 "state": "FREE", 00:31:43.944 "validity": 0.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 11, 00:31:43.945 "state": "FREE", 00:31:43.945 "validity": 0.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 12, 00:31:43.945 "state": "FREE", 00:31:43.945 "validity": 0.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 13, 00:31:43.945 "state": "FREE", 00:31:43.945 "validity": 0.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 14, 00:31:43.945 "state": "FREE", 00:31:43.945 "validity": 0.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 15, 00:31:43.945 "state": "FREE", 00:31:43.945 "validity": 0.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 16, 00:31:43.945 "state": "FREE", 00:31:43.945 "validity": 0.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 17, 00:31:43.945 "state": "FREE", 00:31:43.945 "validity": 0.0 00:31:43.945 } 00:31:43.945 ], 00:31:43.945 "read-only": true 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "name": "cache_device", 00:31:43.945 "type": "bdev", 00:31:43.945 "chunks": [ 00:31:43.945 { 00:31:43.945 "id": 0, 00:31:43.945 "state": "INACTIVE", 00:31:43.945 "utilization": 0.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 1, 00:31:43.945 "state": "CLOSED", 00:31:43.945 "utilization": 1.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 2, 00:31:43.945 "state": "CLOSED", 00:31:43.945 "utilization": 1.0 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 3, 00:31:43.945 "state": "OPEN", 00:31:43.945 "utilization": 0.001953125 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "id": 4, 00:31:43.945 "state": "OPEN", 00:31:43.945 "utilization": 0.0 00:31:43.945 } 00:31:43.945 ], 00:31:43.945 "read-only": true 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "name": "verbose_mode", 
00:31:43.945 "value": true, 00:31:43.945 "unit": "", 00:31:43.945 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:43.945 }, 00:31:43.945 { 00:31:43.945 "name": "prep_upgrade_on_shutdown", 00:31:43.945 "value": true, 00:31:43.945 "unit": "", 00:31:43.945 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:43.945 } 00:31:43.945 ] 00:31:43.945 } 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81235 ]] 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81235 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81235 ']' 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81235 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81235 00:31:43.945 killing process with pid 81235 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81235' 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81235 00:31:43.945 11:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81235 00:31:45.324 [2024-10-07 11:42:26.659935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:45.324 [2024-10-07 11:42:26.678210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.324 [2024-10-07 11:42:26.678267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:45.324 [2024-10-07 11:42:26.678292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:45.324 [2024-10-07 11:42:26.678312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.324 [2024-10-07 11:42:26.678337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:45.324 [2024-10-07 11:42:26.682491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.324 [2024-10-07 11:42:26.682530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:45.324 [2024-10-07 11:42:26.682543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.141 ms 00:31:45.324 [2024-10-07 11:42:26.682554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:33.972363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:33.972440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:53.442 [2024-10-07 11:42:33.972460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7301.610 ms 00:31:53.442 [2024-10-07 11:42:33.972471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:33.973548] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:33.973583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:53.442 [2024-10-07 11:42:33.973596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.060 ms 00:31:53.442 [2024-10-07 11:42:33.973607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:33.974596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:33.974637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:53.442 [2024-10-07 11:42:33.974651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.958 ms 00:31:53.442 [2024-10-07 11:42:33.974662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:33.989519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:33.989562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:53.442 [2024-10-07 11:42:33.989576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.824 ms 00:31:53.442 [2024-10-07 11:42:33.989588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:33.998974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:33.999020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:53.442 [2024-10-07 11:42:33.999034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.361 ms 00:31:53.442 [2024-10-07 11:42:33.999051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:33.999170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:33.999184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:53.442 [2024-10-07 11:42:33.999195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:31:53.442 [2024-10-07 11:42:33.999206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.014099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:34.014137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:53.442 [2024-10-07 11:42:34.014166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.898 ms 00:31:53.442 [2024-10-07 11:42:34.014176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.028872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:34.028915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:53.442 [2024-10-07 11:42:34.028928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.682 ms 00:31:53.442 [2024-10-07 11:42:34.028938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.043721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:34.043766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:53.442 [2024-10-07 11:42:34.043779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.769 ms 00:31:53.442 [2024-10-07 11:42:34.043789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.058382] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:34.058429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:53.442 [2024-10-07 11:42:34.058442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.535 ms 00:31:53.442 [2024-10-07 11:42:34.058452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.058505] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:53.442 [2024-10-07 11:42:34.058523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:53.442 [2024-10-07 11:42:34.058536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:53.442 [2024-10-07 11:42:34.058548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:53.442 [2024-10-07 11:42:34.058560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:53.442 [2024-10-07 11:42:34.058752] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:53.442 [2024-10-07 11:42:34.058763] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fc0d9e76-6a2e-491c-aad8-b95ec45f2ca9 00:31:53.442 [2024-10-07 11:42:34.058775] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:53.442 [2024-10-07 11:42:34.058789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:53.442 [2024-10-07 11:42:34.058799] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:53.442 [2024-10-07 11:42:34.058809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:53.442 [2024-10-07 11:42:34.058823] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:53.442 [2024-10-07 11:42:34.058833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:53.442 [2024-10-07 11:42:34.058843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:53.442 [2024-10-07 11:42:34.058853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:53.442 [2024-10-07 11:42:34.058864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:53.442 [2024-10-07 11:42:34.058875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:34.058886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:53.442 [2024-10-07 11:42:34.058896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.371 ms 00:31:53.442 [2024-10-07 11:42:34.058907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.079041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:34.079079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:53.442 [2024-10-07 11:42:34.079093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.134 ms 00:31:53.442 [2024-10-07 11:42:34.079104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.079579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.442 [2024-10-07 11:42:34.079598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:53.442 [2024-10-07 11:42:34.079610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.453 ms 00:31:53.442 [2024-10-07 11:42:34.079621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.138270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.442 [2024-10-07 11:42:34.138331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:53.442 [2024-10-07 11:42:34.138351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.442 [2024-10-07 11:42:34.138369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.442 [2024-10-07 11:42:34.138410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.138423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:53.443 [2024-10-07 11:42:34.138434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.138445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.138547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.138562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:53.443 [2024-10-07 11:42:34.138574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.138584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.138603] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.138614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:53.443 [2024-10-07 11:42:34.138625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.138635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.263101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.263154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:53.443 [2024-10-07 11:42:34.263171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.263182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.364931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.364986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:53.443 [2024-10-07 11:42:34.365002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.365013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.365123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.365143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:53.443 [2024-10-07 11:42:34.365154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.365165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.365217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.365229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:53.443 [2024-10-07 11:42:34.365240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.365251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.365357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.365370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:53.443 [2024-10-07 11:42:34.365385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.365396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.365435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.365448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:53.443 [2024-10-07 11:42:34.365458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.365468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.365507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.365518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:53.443 [2024-10-07 11:42:34.365542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.365552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 
[2024-10-07 11:42:34.365603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:53.443 [2024-10-07 11:42:34.365615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:53.443 [2024-10-07 11:42:34.365626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:53.443 [2024-10-07 11:42:34.365637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.443 [2024-10-07 11:42:34.365779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7700.005 ms, result 0 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81816 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81816 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81816 ']' 00:31:55.978 11:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.237 11:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.237 11:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.237 11:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.237 11:42:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:56.237 [2024-10-07 11:42:37.802046] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
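
At this point the harness has armed prep_upgrade_on_shutdown, torn FTL down cleanly ('FTL shutdown', result 0), killed the first target (pid 81235), and is bringing up a fresh spdk_tgt from the tgt.json saved before shutdown so FTL can restart into the upgraded layout. A minimal sketch of that stop/relaunch/wait pattern follows; the real helpers are killprocess and waitforlisten in common/autotest_common.sh plus the tcp_target_setup wrapper in ftl/common.sh traced above, and the loop below only approximates them, so treat the polling logic and variable names as illustrative assumptions:

#!/usr/bin/env bash
# Sketch of the tcp_target_shutdown / tcp_target_setup flow traced above:
# stop the old spdk_tgt, relaunch it from the saved JSON config, then poll
# the RPC socket until it answers. Illustrative only, not the harness code.
set -euo pipefail

old_pid=${1:?usage: $0 <old_spdk_tgt_pid>}    # e.g. 81235 in the log above
SPDK_DIR=/home/vagrant/spdk_repo/spdk         # checkout path used by this run
RPC_SOCK=/var/tmp/spdk.sock                   # default spdk_tgt RPC socket

# killprocess()-style teardown: signal the target, then wait until the
# pid is really gone before reusing its resources.
kill "$old_pid" 2>/dev/null || true
while kill -0 "$old_pid" 2>/dev/null; do sleep 0.1; done

# Relaunch pinned to core 0 with the config saved before shutdown, as
# ftl/common.sh does: spdk_tgt '--cpumask=[0]' --config=.../tgt.json
"$SPDK_DIR/build/bin/spdk_tgt" --cpumask='[0]' \
    --config="$SPDK_DIR/test/ftl/config/tgt.json" &
echo "new spdk_tgt pid: $!"

# waitforlisten()-style startup check: poll until the RPC socket accepts
# requests, giving up after roughly 30 seconds.
for _ in $(seq 1 300); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        echo "target is up and listening on $RPC_SOCK"
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $RPC_SOCK" >&2
exit 1

The startup trace that follows shows why the wait matters: FTL has to scrub the NV cache and replay its metadata (about 4.2 s of 'FTL startup' work below) before the TCP listener on 127.0.0.1:4420 comes up, so an RPC issued immediately after fork would fail.
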
00:31:56.237 [2024-10-07 11:42:37.802171] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81816 ] 00:31:56.496 [2024-10-07 11:42:37.978696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.496 [2024-10-07 11:42:38.183281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.439 [2024-10-07 11:42:39.135897] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:57.439 [2024-10-07 11:42:39.135961] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:57.700 [2024-10-07 11:42:39.282567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.282610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:57.700 [2024-10-07 11:42:39.282629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:57.700 [2024-10-07 11:42:39.282639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.282688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.282699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:57.700 [2024-10-07 11:42:39.282710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:31:57.700 [2024-10-07 11:42:39.282720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.282762] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:57.700 [2024-10-07 11:42:39.283716] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:57.700 [2024-10-07 11:42:39.283756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.283768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:57.700 [2024-10-07 11:42:39.283779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.009 ms 00:31:57.700 [2024-10-07 11:42:39.283793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.285256] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:57.700 [2024-10-07 11:42:39.304554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.304590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:57.700 [2024-10-07 11:42:39.304619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.330 ms 00:31:57.700 [2024-10-07 11:42:39.304630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.304711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.304729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:57.700 [2024-10-07 11:42:39.304759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:31:57.700 [2024-10-07 11:42:39.304770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.311694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 
11:42:39.311721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:57.700 [2024-10-07 11:42:39.311733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.847 ms 00:31:57.700 [2024-10-07 11:42:39.311752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.311817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.311830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:57.700 [2024-10-07 11:42:39.311841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:31:57.700 [2024-10-07 11:42:39.311854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.311900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.311912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:57.700 [2024-10-07 11:42:39.311923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:57.700 [2024-10-07 11:42:39.311933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.311959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:57.700 [2024-10-07 11:42:39.316855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.316885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:57.700 [2024-10-07 11:42:39.316897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.909 ms 00:31:57.700 [2024-10-07 11:42:39.316907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.316935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.316946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:57.700 [2024-10-07 11:42:39.316961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:57.700 [2024-10-07 11:42:39.316970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.317027] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:57.700 [2024-10-07 11:42:39.317050] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:57.700 [2024-10-07 11:42:39.317084] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:57.700 [2024-10-07 11:42:39.317102] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:57.700 [2024-10-07 11:42:39.317193] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:57.700 [2024-10-07 11:42:39.317210] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:57.700 [2024-10-07 11:42:39.317223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:57.700 [2024-10-07 11:42:39.317283] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:57.700 [2024-10-07 11:42:39.317296] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:57.700 [2024-10-07 11:42:39.317308] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:57.700 [2024-10-07 11:42:39.317318] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:57.700 [2024-10-07 11:42:39.317328] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:57.700 [2024-10-07 11:42:39.317338] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:57.700 [2024-10-07 11:42:39.317348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.317358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:57.700 [2024-10-07 11:42:39.317369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:31:57.700 [2024-10-07 11:42:39.317381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.700 [2024-10-07 11:42:39.317456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.700 [2024-10-07 11:42:39.317466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:57.700 [2024-10-07 11:42:39.317477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:57.700 [2024-10-07 11:42:39.317486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.701 [2024-10-07 11:42:39.317577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:57.701 [2024-10-07 11:42:39.317590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:57.701 [2024-10-07 11:42:39.317600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:57.701 [2024-10-07 11:42:39.317610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:57.701 [2024-10-07 11:42:39.317633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:57.701 [2024-10-07 11:42:39.317653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:57.701 [2024-10-07 11:42:39.317663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:57.701 [2024-10-07 11:42:39.317672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:57.701 [2024-10-07 11:42:39.317693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:57.701 [2024-10-07 11:42:39.317702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:57.701 [2024-10-07 11:42:39.317721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:57.701 [2024-10-07 11:42:39.317741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:57.701 [2024-10-07 11:42:39.317770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:57.701 [2024-10-07 11:42:39.317780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317789] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:57.701 [2024-10-07 11:42:39.317798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:57.701 [2024-10-07 11:42:39.317807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:57.701 [2024-10-07 11:42:39.317816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:57.701 [2024-10-07 11:42:39.317826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:57.701 [2024-10-07 11:42:39.317846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:57.701 [2024-10-07 11:42:39.317854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:57.701 [2024-10-07 11:42:39.317864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:57.701 [2024-10-07 11:42:39.317872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:57.701 [2024-10-07 11:42:39.317882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:57.701 [2024-10-07 11:42:39.317891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:57.701 [2024-10-07 11:42:39.317900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:57.701 [2024-10-07 11:42:39.317908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:57.701 [2024-10-07 11:42:39.317917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:57.701 [2024-10-07 11:42:39.317926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:57.701 [2024-10-07 11:42:39.317944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:57.701 [2024-10-07 11:42:39.317952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:57.701 [2024-10-07 11:42:39.317970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.317988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:57.701 [2024-10-07 11:42:39.317996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:57.701 [2024-10-07 11:42:39.318005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.318014] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:57.701 [2024-10-07 11:42:39.318025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:57.701 [2024-10-07 11:42:39.318034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:57.701 [2024-10-07 11:42:39.318044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:57.701 [2024-10-07 11:42:39.318054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:57.701 [2024-10-07 11:42:39.318063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:57.701 [2024-10-07 11:42:39.318071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:57.701 [2024-10-07 11:42:39.318081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:57.701 [2024-10-07 11:42:39.318090] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:57.701 [2024-10-07 11:42:39.318099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:57.701 [2024-10-07 11:42:39.318109] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:57.701 [2024-10-07 11:42:39.318122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:57.701 [2024-10-07 11:42:39.318143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:57.701 [2024-10-07 11:42:39.318172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:57.701 [2024-10-07 11:42:39.318182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:57.701 [2024-10-07 11:42:39.318192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:57.701 [2024-10-07 11:42:39.318203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:57.701 [2024-10-07 11:42:39.318272] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:57.701 [2024-10-07 11:42:39.318293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:57.701 [2024-10-07 11:42:39.318332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:57.701 [2024-10-07 11:42:39.318366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:57.701 [2024-10-07 11:42:39.318384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:57.701 [2024-10-07 11:42:39.318403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.701 [2024-10-07 11:42:39.318419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:57.701 [2024-10-07 11:42:39.318429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.881 ms 00:31:57.701 [2024-10-07 11:42:39.318443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.701 [2024-10-07 11:42:39.318491] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:57.701 [2024-10-07 11:42:39.318504] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:01.942 [2024-10-07 11:42:43.112768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.112834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:01.942 [2024-10-07 11:42:43.112850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3800.436 ms 00:32:01.942 [2024-10-07 11:42:43.112871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.151996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.152049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:01.942 [2024-10-07 11:42:43.152082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.882 ms 00:32:01.942 [2024-10-07 11:42:43.152093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.152190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.152203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:01.942 [2024-10-07 11:42:43.152214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:01.942 [2024-10-07 11:42:43.152224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.212313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.212354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:01.942 [2024-10-07 11:42:43.212385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 60.141 ms 00:32:01.942 [2024-10-07 11:42:43.212396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.212438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.212449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:01.942 [2024-10-07 11:42:43.212460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:01.942 [2024-10-07 11:42:43.212470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.212999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.213014] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:01.942 [2024-10-07 11:42:43.213026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.447 ms 00:32:01.942 [2024-10-07 11:42:43.213036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.213080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.213091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:01.942 [2024-10-07 11:42:43.213102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:01.942 [2024-10-07 11:42:43.213112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.233021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.233057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:01.942 [2024-10-07 11:42:43.233086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.918 ms 00:32:01.942 [2024-10-07 11:42:43.233097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.252266] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:01.942 [2024-10-07 11:42:43.252305] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:01.942 [2024-10-07 11:42:43.252320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.252330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:01.942 [2024-10-07 11:42:43.252341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.122 ms 00:32:01.942 [2024-10-07 11:42:43.252351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.272473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.272509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:01.942 [2024-10-07 11:42:43.272523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.109 ms 00:32:01.942 [2024-10-07 11:42:43.272549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.290186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.290222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:01.942 [2024-10-07 11:42:43.290235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.613 ms 00:32:01.942 [2024-10-07 11:42:43.290244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.308011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.942 [2024-10-07 11:42:43.308048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:01.942 [2024-10-07 11:42:43.308076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.744 ms 00:32:01.942 [2024-10-07 11:42:43.308085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.942 [2024-10-07 11:42:43.308888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.308913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:01.943 [2024-10-07 
11:42:43.308925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.699 ms 00:32:01.943 [2024-10-07 11:42:43.308935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.396233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.396284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:01.943 [2024-10-07 11:42:43.396316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 87.414 ms 00:32:01.943 [2024-10-07 11:42:43.396327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.407236] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:01.943 [2024-10-07 11:42:43.407927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.407947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:01.943 [2024-10-07 11:42:43.407964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.566 ms 00:32:01.943 [2024-10-07 11:42:43.407974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.408051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.408064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:01.943 [2024-10-07 11:42:43.408075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:01.943 [2024-10-07 11:42:43.408085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.408147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.408160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:01.943 [2024-10-07 11:42:43.408172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:01.943 [2024-10-07 11:42:43.408185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.408210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.408221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:01.943 [2024-10-07 11:42:43.408232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:01.943 [2024-10-07 11:42:43.408242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.408282] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:01.943 [2024-10-07 11:42:43.408294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.408304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:01.943 [2024-10-07 11:42:43.408314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:01.943 [2024-10-07 11:42:43.408324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.444019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.444064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:01.943 [2024-10-07 11:42:43.444094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.728 ms 00:32:01.943 [2024-10-07 11:42:43.444104] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.444187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.943 [2024-10-07 11:42:43.444199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:01.943 [2024-10-07 11:42:43.444210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:32:01.943 [2024-10-07 11:42:43.444224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.943 [2024-10-07 11:42:43.445412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4169.075 ms, result 0 00:32:01.943 [2024-10-07 11:42:43.460387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.943 [2024-10-07 11:42:43.476380] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:01.943 [2024-10-07 11:42:43.485385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:02.202 11:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:02.202 11:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:32:02.202 11:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:02.202 11:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:02.202 11:42:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:02.461 [2024-10-07 11:42:44.048830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.461 [2024-10-07 11:42:44.048871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:02.461 [2024-10-07 11:42:44.048886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:02.461 [2024-10-07 11:42:44.048897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.461 [2024-10-07 11:42:44.048922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.461 [2024-10-07 11:42:44.048933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:02.461 [2024-10-07 11:42:44.048944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:02.461 [2024-10-07 11:42:44.048954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.461 [2024-10-07 11:42:44.048974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.461 [2024-10-07 11:42:44.048989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:02.461 [2024-10-07 11:42:44.049000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:02.461 [2024-10-07 11:42:44.049010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.461 [2024-10-07 11:42:44.049065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.233 ms, result 0 00:32:02.461 true 00:32:02.461 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:02.720 { 00:32:02.720 "name": "ftl", 00:32:02.720 "properties": [ 00:32:02.720 { 00:32:02.720 "name": "superblock_version", 00:32:02.720 "value": 5, 00:32:02.720 "read-only": true 00:32:02.720 }, 
00:32:02.720 { 00:32:02.720 "name": "base_device", 00:32:02.720 "bands": [ 00:32:02.720 { 00:32:02.720 "id": 0, 00:32:02.720 "state": "CLOSED", 00:32:02.720 "validity": 1.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 1, 00:32:02.720 "state": "CLOSED", 00:32:02.720 "validity": 1.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 2, 00:32:02.720 "state": "CLOSED", 00:32:02.720 "validity": 0.007843137254901933 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 3, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 4, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 5, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 6, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 7, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 8, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 9, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 10, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 11, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 12, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 13, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 14, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 15, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 16, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 17, 00:32:02.720 "state": "FREE", 00:32:02.720 "validity": 0.0 00:32:02.720 } 00:32:02.720 ], 00:32:02.720 "read-only": true 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "name": "cache_device", 00:32:02.720 "type": "bdev", 00:32:02.720 "chunks": [ 00:32:02.720 { 00:32:02.720 "id": 0, 00:32:02.720 "state": "INACTIVE", 00:32:02.720 "utilization": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 1, 00:32:02.720 "state": "OPEN", 00:32:02.720 "utilization": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 2, 00:32:02.720 "state": "OPEN", 00:32:02.720 "utilization": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 3, 00:32:02.720 "state": "FREE", 00:32:02.720 "utilization": 0.0 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "id": 4, 00:32:02.720 "state": "FREE", 00:32:02.720 "utilization": 0.0 00:32:02.720 } 00:32:02.720 ], 00:32:02.720 "read-only": true 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "name": "verbose_mode", 00:32:02.720 "value": true, 00:32:02.720 "unit": "", 00:32:02.720 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:02.720 }, 00:32:02.720 { 00:32:02.720 "name": "prep_upgrade_on_shutdown", 00:32:02.720 "value": false, 00:32:02.720 "unit": "", 00:32:02.720 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:02.720 } 00:32:02.720 ] 00:32:02.720 } 00:32:02.720 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:32:02.720 11:42:44 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:02.720 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:02.979 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:02.979 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:02.979 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:02.979 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:02.979 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:03.238 Validate MD5 checksum, iteration 1 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:03.238 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:03.238 [2024-10-07 11:42:44.863702] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
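The xtrace above is one pass of the test's checksum loop: spdk_dd reads a 1 GiB window (1024 blocks of 1 MiB, queue depth 2) from the ftln1 bdev over NVMe/TCP into a scratch file, which is then hashed. A condensed sketch of that loop, with the iterations count and the sums array as illustrative stand-ins for the script's own bookkeeping:

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read the next 1 GiB window from the initiator-side ftl bdev
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        # sums[i] is an assumed name for the checksum recorded before shutdown
        [[ $sum == "${sums[i]}" ]] || return 1
    done

The skip=1024 assignment and the second iteration banner below are this same loop advancing to the next 1 GiB window.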
00:32:03.238 [2024-10-07 11:42:44.863863] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81905 ] 00:32:03.497 [2024-10-07 11:42:45.039105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.755 [2024-10-07 11:42:45.268084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.659  [2024-10-07T11:42:47.628Z] Copying: 675/1024 [MB] (675 MBps) [2024-10-07T11:42:49.528Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:32:07.817 00:32:07.817 11:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:07.817 11:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9263ad9e2ce20f0defe4d1cbf730c699 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9263ad9e2ce20f0defe4d1cbf730c699 != \9\2\6\3\a\d\9\e\2\c\e\2\0\f\0\d\e\f\e\4\d\1\c\b\f\7\3\0\c\6\9\9 ]] 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:09.719 Validate MD5 checksum, iteration 2 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:09.719 11:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:09.719 [2024-10-07 11:42:51.032977] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
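One detail of the traced comparison is worth decoding: the right-hand side shows up as \9\2\6\3... because, inside [[ ]], the operand after != is a pattern, and bash's xtrace prints it with each character backslash-escaped. The check is still a plain string (in)equality; a minimal reproduction with an illustrative value:

    sum=9263ad9e2ce20f0defe4d1cbf730c699
    expected=$sum
    set -x
    # the trace renders $expected as \9\2\6\3... on the right of !=
    [[ $sum != $expected ]] && echo 'MD5 mismatch'
    set +x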
00:32:09.719 [2024-10-07 11:42:51.033096] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81972 ] 00:32:09.719 [2024-10-07 11:42:51.202416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.719 [2024-10-07 11:42:51.423512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.623  [2024-10-07T11:42:53.593Z] Copying: 680/1024 [MB] (680 MBps) [2024-10-07T11:42:54.971Z] Copying: 1024/1024 [MB] (average 682 MBps) 00:32:13.260 00:32:13.519 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:13.519 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:15.421 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:15.421 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6e9c9829a975a18dda2e2bf56c761f85 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6e9c9829a975a18dda2e2bf56c761f85 != \6\e\9\c\9\8\2\9\a\9\7\5\a\1\8\d\d\a\2\e\2\b\f\5\6\c\7\6\1\f\8\5 ]] 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81816 ]] 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81816 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82033 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82033 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82033 ']' 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
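The interesting part of the startup that follows is how it was reached: the previous target (pid 81816) is killed with SIGKILL, so FTL never gets to persist a clean-shutdown state, and a fresh spdk_tgt (pid 82033) is booted from the same tgt.json. A rough sketch of that sequence, assuming $rootdir stands for /home/vagrant/spdk_repo/spdk and that waitforlisten behaves as its name and the trace suggest:

    kill -9 "$spdk_tgt_pid"     # dirty shutdown: no FTL cleanup runs
    unset spdk_tgt_pid
    # relaunch from the saved config; FTL must recover on load
    "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$rootdir/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # block until /var/tmp/spdk.sock answers

The recovery this forces is the long trace below: superblock load, P2L checkpoint replay, recovery of the two open NV-cache chunks, and an L2P restore from shared memory.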
00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:15.422 11:42:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:15.422 [2024-10-07 11:42:56.886178] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:32:15.422 [2024-10-07 11:42:56.886311] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82033 ] 00:32:15.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81816 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:15.422 [2024-10-07 11:42:57.059397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.680 [2024-10-07 11:42:57.272829] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.617 [2024-10-07 11:42:58.249270] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:16.617 [2024-10-07 11:42:58.249335] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:16.876 [2024-10-07 11:42:58.396048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.396092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:16.876 [2024-10-07 11:42:58.396110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:16.876 [2024-10-07 11:42:58.396121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.396176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.396189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:16.876 [2024-10-07 11:42:58.396200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:32:16.876 [2024-10-07 11:42:58.396210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.396242] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:16.876 [2024-10-07 11:42:58.397208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:16.876 [2024-10-07 11:42:58.397236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.397248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:16.876 [2024-10-07 11:42:58.397259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.009 ms 00:32:16.876 [2024-10-07 11:42:58.397273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.397616] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:16.876 [2024-10-07 11:42:58.422563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.422601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:16.876 [2024-10-07 11:42:58.422616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.988 ms 00:32:16.876 [2024-10-07 11:42:58.422633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.436851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:16.876 [2024-10-07 11:42:58.436886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:16.876 [2024-10-07 11:42:58.436898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:32:16.876 [2024-10-07 11:42:58.436909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.437392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.437410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:16.876 [2024-10-07 11:42:58.437422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:32:16.876 [2024-10-07 11:42:58.437432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.437488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.437502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:16.876 [2024-10-07 11:42:58.437513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:32:16.876 [2024-10-07 11:42:58.437523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.437552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.437573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:16.876 [2024-10-07 11:42:58.437584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:16.876 [2024-10-07 11:42:58.437596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.437619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:16.876 [2024-10-07 11:42:58.441579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.441605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:16.876 [2024-10-07 11:42:58.441618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.972 ms 00:32:16.876 [2024-10-07 11:42:58.441627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.441660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.441671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:16.876 [2024-10-07 11:42:58.441682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:16.876 [2024-10-07 11:42:58.441691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.441729] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:16.876 [2024-10-07 11:42:58.441769] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:16.876 [2024-10-07 11:42:58.441807] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:16.876 [2024-10-07 11:42:58.441826] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:16.876 [2024-10-07 11:42:58.441913] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:16.876 [2024-10-07 11:42:58.441926] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:16.876 [2024-10-07 11:42:58.441939] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:16.876 [2024-10-07 11:42:58.441953] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:16.876 [2024-10-07 11:42:58.441965] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:16.876 [2024-10-07 11:42:58.441976] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:16.876 [2024-10-07 11:42:58.441989] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:16.876 [2024-10-07 11:42:58.441999] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:16.876 [2024-10-07 11:42:58.442009] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:16.876 [2024-10-07 11:42:58.442019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.442029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:16.876 [2024-10-07 11:42:58.442040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:32:16.876 [2024-10-07 11:42:58.442050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.876 [2024-10-07 11:42:58.442124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.876 [2024-10-07 11:42:58.442134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:16.876 [2024-10-07 11:42:58.442145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:16.877 [2024-10-07 11:42:58.442159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.877 [2024-10-07 11:42:58.442248] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:16.877 [2024-10-07 11:42:58.442261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:16.877 [2024-10-07 11:42:58.442272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:16.877 [2024-10-07 11:42:58.442299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:16.877 [2024-10-07 11:42:58.442333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:16.877 [2024-10-07 11:42:58.442367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:16.877 [2024-10-07 11:42:58.442384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:16.877 [2024-10-07 11:42:58.442394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:16.877 [2024-10-07 11:42:58.442413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:16.877 [2024-10-07 11:42:58.442422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:16.877 [2024-10-07 11:42:58.442442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:16.877 [2024-10-07 11:42:58.442451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:16.877 [2024-10-07 11:42:58.442470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:16.877 [2024-10-07 11:42:58.442479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:16.877 [2024-10-07 11:42:58.442498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:16.877 [2024-10-07 11:42:58.442507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:16.877 [2024-10-07 11:42:58.442528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:16.877 [2024-10-07 11:42:58.442539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:16.877 [2024-10-07 11:42:58.442548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:16.877 [2024-10-07 11:42:58.442557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:16.877 [2024-10-07 11:42:58.442566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:16.877 [2024-10-07 11:42:58.442576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:16.877 [2024-10-07 11:42:58.442585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:16.877 [2024-10-07 11:42:58.442594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:16.877 [2024-10-07 11:42:58.442603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:16.877 [2024-10-07 11:42:58.442612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:16.877 [2024-10-07 11:42:58.442621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:16.877 [2024-10-07 11:42:58.442630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:16.877 [2024-10-07 11:42:58.442652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:16.877 [2024-10-07 11:42:58.442661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:16.877 [2024-10-07 11:42:58.442679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:16.877 [2024-10-07 11:42:58.442706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:16.877 [2024-10-07 11:42:58.442715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442725] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:16.877 [2024-10-07 11:42:58.442736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:16.877 [2024-10-07 11:42:58.442757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:16.877 [2024-10-07 11:42:58.442767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:16.877 [2024-10-07 11:42:58.442778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:16.877 [2024-10-07 11:42:58.442788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:16.877 [2024-10-07 11:42:58.442797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:16.877 [2024-10-07 11:42:58.442807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:16.877 [2024-10-07 11:42:58.442816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:16.877 [2024-10-07 11:42:58.442825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:16.877 [2024-10-07 11:42:58.442836] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:16.877 [2024-10-07 11:42:58.442853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.442864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:16.877 [2024-10-07 11:42:58.442875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.442885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.442896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:16.877 [2024-10-07 11:42:58.442906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:16.877 [2024-10-07 11:42:58.442917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:16.877 [2024-10-07 11:42:58.442927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:16.877 [2024-10-07 11:42:58.442937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.442947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.442957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.442967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.442978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.442991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.443001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:16.877 [2024-10-07 11:42:58.443011] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:16.877 [2024-10-07 11:42:58.443022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.443033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:16.877 [2024-10-07 11:42:58.443044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:16.877 [2024-10-07 11:42:58.443055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:16.877 [2024-10-07 11:42:58.443065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:16.877 [2024-10-07 11:42:58.443076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.877 [2024-10-07 11:42:58.443087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:16.877 [2024-10-07 11:42:58.443097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.884 ms 00:32:16.877 [2024-10-07 11:42:58.443107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.877 [2024-10-07 11:42:58.480986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.877 [2024-10-07 11:42:58.481027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:16.877 [2024-10-07 11:42:58.481043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.884 ms 00:32:16.877 [2024-10-07 11:42:58.481055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.877 [2024-10-07 11:42:58.481104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.877 [2024-10-07 11:42:58.481115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:16.877 [2024-10-07 11:42:58.481126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:16.877 [2024-10-07 11:42:58.481141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.877 [2024-10-07 11:42:58.541429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.877 [2024-10-07 11:42:58.541470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:16.877 [2024-10-07 11:42:58.541485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 60.313 ms 00:32:16.877 [2024-10-07 11:42:58.541496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.877 [2024-10-07 11:42:58.541552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.877 [2024-10-07 11:42:58.541564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:16.877 [2024-10-07 11:42:58.541575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:16.877 [2024-10-07 11:42:58.541585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.877 [2024-10-07 11:42:58.541724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.877 [2024-10-07 11:42:58.541752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:16.877 [2024-10-07 11:42:58.541764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:32:16.877 [2024-10-07 11:42:58.541774] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:16.877 [2024-10-07 11:42:58.541818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.877 [2024-10-07 11:42:58.541833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:16.877 [2024-10-07 11:42:58.541844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:16.877 [2024-10-07 11:42:58.541854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.877 [2024-10-07 11:42:58.562530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.877 [2024-10-07 11:42:58.562568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:16.878 [2024-10-07 11:42:58.562583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.684 ms 00:32:16.878 [2024-10-07 11:42:58.562593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.878 [2024-10-07 11:42:58.562729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.878 [2024-10-07 11:42:58.562761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:16.878 [2024-10-07 11:42:58.562774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:16.878 [2024-10-07 11:42:58.562784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.137 [2024-10-07 11:42:58.587470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.137 [2024-10-07 11:42:58.587510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:17.137 [2024-10-07 11:42:58.587524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.700 ms 00:32:17.137 [2024-10-07 11:42:58.587534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.137 [2024-10-07 11:42:58.602389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.137 [2024-10-07 11:42:58.602430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:17.137 [2024-10-07 11:42:58.602443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.719 ms 00:32:17.137 [2024-10-07 11:42:58.602454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.137 [2024-10-07 11:42:58.688855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.137 [2024-10-07 11:42:58.688913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:17.137 [2024-10-07 11:42:58.688930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.473 ms 00:32:17.137 [2024-10-07 11:42:58.688941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.137 [2024-10-07 11:42:58.689116] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:17.137 [2024-10-07 11:42:58.689245] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:17.137 [2024-10-07 11:42:58.689358] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:17.137 [2024-10-07 11:42:58.689476] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:17.137 [2024-10-07 11:42:58.689494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.137 [2024-10-07 11:42:58.689513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:17.137 [2024-10-07 
11:42:58.689524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.501 ms 00:32:17.137 [2024-10-07 11:42:58.689538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.137 [2024-10-07 11:42:58.689629] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:17.137 [2024-10-07 11:42:58.689645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.137 [2024-10-07 11:42:58.689655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:17.137 [2024-10-07 11:42:58.689666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:17.137 [2024-10-07 11:42:58.689676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.137 [2024-10-07 11:42:58.712182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.137 [2024-10-07 11:42:58.712227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:17.137 [2024-10-07 11:42:58.712242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.517 ms 00:32:17.137 [2024-10-07 11:42:58.712252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.137 [2024-10-07 11:42:58.726388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.137 [2024-10-07 11:42:58.726426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:17.137 [2024-10-07 11:42:58.726440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:17.137 [2024-10-07 11:42:58.726455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.137 [2024-10-07 11:42:58.726557] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:17.137 [2024-10-07 11:42:58.726760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.137 [2024-10-07 11:42:58.726776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:17.137 [2024-10-07 11:42:58.726787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:32:17.137 [2024-10-07 11:42:58.726798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.704 [2024-10-07 11:42:59.273673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.704 [2024-10-07 11:42:59.273748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:17.704 [2024-10-07 11:42:59.273767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 546.612 ms 00:32:17.704 [2024-10-07 11:42:59.273778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.704 [2024-10-07 11:42:59.279378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.704 [2024-10-07 11:42:59.279430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:17.704 [2024-10-07 11:42:59.279444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.981 ms 00:32:17.704 [2024-10-07 11:42:59.279455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.704 [2024-10-07 11:42:59.279957] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:17.704 [2024-10-07 11:42:59.279995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.704 [2024-10-07 11:42:59.280006] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:17.704 [2024-10-07 11:42:59.280018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.510 ms 00:32:17.704 [2024-10-07 11:42:59.280029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.704 [2024-10-07 11:42:59.280068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.704 [2024-10-07 11:42:59.280080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:17.704 [2024-10-07 11:42:59.280091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:17.704 [2024-10-07 11:42:59.280102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:17.704 [2024-10-07 11:42:59.280138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 554.481 ms, result 0 00:32:17.704 [2024-10-07 11:42:59.280178] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:17.704 [2024-10-07 11:42:59.280252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:17.704 [2024-10-07 11:42:59.280262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:17.704 [2024-10-07 11:42:59.280272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:32:17.704 [2024-10-07 11:42:59.280281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.838379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.838449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:18.270 [2024-10-07 11:42:59.838467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 557.819 ms 00:32:18.270 [2024-10-07 11:42:59.838478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.844155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.844195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:18.270 [2024-10-07 11:42:59.844208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.056 ms 00:32:18.270 [2024-10-07 11:42:59.844218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.844734] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:18.270 [2024-10-07 11:42:59.844800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.844811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:18.270 [2024-10-07 11:42:59.844822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:32:18.270 [2024-10-07 11:42:59.844832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.844864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.844876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:18.270 [2024-10-07 11:42:59.844887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:18.270 [2024-10-07 11:42:59.844896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 
11:42:59.844933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 565.668 ms, result 0 00:32:18.270 [2024-10-07 11:42:59.844975] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:18.270 [2024-10-07 11:42:59.844988] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:18.270 [2024-10-07 11:42:59.845001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.845013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:18.270 [2024-10-07 11:42:59.845028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1120.282 ms 00:32:18.270 [2024-10-07 11:42:59.845038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.845069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.845081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:18.270 [2024-10-07 11:42:59.845092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:18.270 [2024-10-07 11:42:59.845101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.856660] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:18.270 [2024-10-07 11:42:59.856811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.856826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:18.270 [2024-10-07 11:42:59.856838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.710 ms 00:32:18.270 [2024-10-07 11:42:59.856849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.857449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.857492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:18.270 [2024-10-07 11:42:59.857504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms 00:32:18.270 [2024-10-07 11:42:59.857514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.859549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.859578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:18.270 [2024-10-07 11:42:59.859590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.016 ms 00:32:18.270 [2024-10-07 11:42:59.859600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.859650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.859662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:18.270 [2024-10-07 11:42:59.859672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:18.270 [2024-10-07 11:42:59.859683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.859793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.859806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:18.270 
[2024-10-07 11:42:59.859816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:18.270 [2024-10-07 11:42:59.859827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.859850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.859864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:18.270 [2024-10-07 11:42:59.859874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:18.270 [2024-10-07 11:42:59.859884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.859915] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:18.270 [2024-10-07 11:42:59.859927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.859937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:18.270 [2024-10-07 11:42:59.859947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:18.270 [2024-10-07 11:42:59.859958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.860013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.270 [2024-10-07 11:42:59.860025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:18.270 [2024-10-07 11:42:59.860042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:32:18.270 [2024-10-07 11:42:59.860052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.270 [2024-10-07 11:42:59.861052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1466.915 ms, result 0 00:32:18.270 [2024-10-07 11:42:59.873414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.270 [2024-10-07 11:42:59.889377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:18.270 [2024-10-07 11:42:59.898960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:18.270 Validate MD5 checksum, iteration 1 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:18.270 11:42:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:18.270 11:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:18.528 [2024-10-07 11:43:00.037607] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:32:18.528 [2024-10-07 11:43:00.037721] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82073 ] 00:32:18.528 [2024-10-07 11:43:00.209113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.787 [2024-10-07 11:43:00.422139] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.690  [2024-10-07T11:43:02.660Z] Copying: 691/1024 [MB] (691 MBps) [2024-10-07T11:43:07.987Z] Copying: 1024/1024 [MB] (average 687 MBps) 00:32:26.276 00:32:26.276 11:43:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:26.276 11:43:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:27.650 Validate MD5 checksum, iteration 2 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9263ad9e2ce20f0defe4d1cbf730c699 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9263ad9e2ce20f0defe4d1cbf730c699 != \9\2\6\3\a\d\9\e\2\c\e\2\0\f\0\d\e\f\e\4\d\1\c\b\f\7\3\0\c\6\9\9 ]] 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:27.650 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:27.650 [2024-10-07 11:43:09.194205] Starting SPDK v25.01-pre git sha1 
d16db39ee / DPDK 24.03.0 initialization... 00:32:27.650 [2024-10-07 11:43:09.194495] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82169 ] 00:32:27.909 [2024-10-07 11:43:09.365851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.909 [2024-10-07 11:43:09.598100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.811  [2024-10-07T11:43:11.780Z] Copying: 692/1024 [MB] (692 MBps) [2024-10-07T11:43:13.683Z] Copying: 1024/1024 [MB] (average 693 MBps) 00:32:31.972 00:32:31.972 11:43:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:31.972 11:43:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:33.350 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:33.351 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6e9c9829a975a18dda2e2bf56c761f85 00:32:33.351 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6e9c9829a975a18dda2e2bf56c761f85 != \6\e\9\c\9\8\2\9\a\9\7\5\a\1\8\d\d\a\2\e\2\b\f\5\6\c\7\6\1\f\8\5 ]] 00:32:33.351 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:33.351 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:33.351 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:33.351 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:33.351 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:33.351 11:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82033 ]] 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82033 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82033 ']' 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 82033 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82033 00:32:33.610 killing process with pid 82033 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82033' 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 82033 00:32:33.610 11:43:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 82033 00:32:34.548 [2024-10-07 11:43:16.229666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:34.548 [2024-10-07 11:43:16.250219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.548 [2024-10-07 11:43:16.250264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:34.548 [2024-10-07 11:43:16.250293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:34.548 [2024-10-07 11:43:16.250315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.548 [2024-10-07 11:43:16.250351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:34.548 [2024-10-07 11:43:16.254502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.548 [2024-10-07 11:43:16.254537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:34.548 [2024-10-07 11:43:16.254550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.137 ms 00:32:34.548 [2024-10-07 11:43:16.254561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.548 [2024-10-07 11:43:16.254784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.548 [2024-10-07 11:43:16.254800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:34.548 [2024-10-07 11:43:16.254811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:32:34.548 [2024-10-07 11:43:16.254821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.548 [2024-10-07 11:43:16.255896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.548 [2024-10-07 11:43:16.255937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:34.548 [2024-10-07 11:43:16.255949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.058 ms 00:32:34.548 [2024-10-07 11:43:16.255960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.548 [2024-10-07 11:43:16.256893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.548 [2024-10-07 11:43:16.257034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:34.548 [2024-10-07 11:43:16.257053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.898 ms 00:32:34.548 [2024-10-07 11:43:16.257064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.807 [2024-10-07 11:43:16.272247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.807 [2024-10-07 11:43:16.272390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:34.807 [2024-10-07 11:43:16.272412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.140 ms 00:32:34.807 [2024-10-07 11:43:16.272424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.807 [2024-10-07 11:43:16.280331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.807 [2024-10-07 11:43:16.280369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:34.807 [2024-10-07 11:43:16.280383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.860 ms 00:32:34.807 [2024-10-07 11:43:16.280394] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:34.807 [2024-10-07 11:43:16.280506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.807 [2024-10-07 11:43:16.280527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:34.807 [2024-10-07 11:43:16.280539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:32:34.807 [2024-10-07 11:43:16.280550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.807 [2024-10-07 11:43:16.295147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.807 [2024-10-07 11:43:16.295302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:34.807 [2024-10-07 11:43:16.295322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.602 ms 00:32:34.807 [2024-10-07 11:43:16.295332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.807 [2024-10-07 11:43:16.309975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.807 [2024-10-07 11:43:16.310112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:34.807 [2024-10-07 11:43:16.310133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.604 ms 00:32:34.807 [2024-10-07 11:43:16.310145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.808 [2024-10-07 11:43:16.325376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.808 [2024-10-07 11:43:16.325517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:34.808 [2024-10-07 11:43:16.325538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.198 ms 00:32:34.808 [2024-10-07 11:43:16.325548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.808 [2024-10-07 11:43:16.340350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.808 [2024-10-07 11:43:16.340494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:34.808 [2024-10-07 11:43:16.340514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.700 ms 00:32:34.808 [2024-10-07 11:43:16.340525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.808 [2024-10-07 11:43:16.340614] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:34.808 [2024-10-07 11:43:16.340632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:34.808 [2024-10-07 11:43:16.340646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:34.808 [2024-10-07 11:43:16.340657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:34.808 [2024-10-07 11:43:16.340668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 
[2024-10-07 11:43:16.340722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:34.808 [2024-10-07 11:43:16.340844] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:34.808 [2024-10-07 11:43:16.340854] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fc0d9e76-6a2e-491c-aad8-b95ec45f2ca9 00:32:34.808 [2024-10-07 11:43:16.340865] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:34.808 [2024-10-07 11:43:16.340875] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:34.808 [2024-10-07 11:43:16.340885] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:34.808 [2024-10-07 11:43:16.340901] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:34.808 [2024-10-07 11:43:16.340911] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:34.808 [2024-10-07 11:43:16.340922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:34.808 [2024-10-07 11:43:16.340932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:34.808 [2024-10-07 11:43:16.340941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:34.808 [2024-10-07 11:43:16.340950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:34.808 [2024-10-07 11:43:16.340961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.808 [2024-10-07 11:43:16.340973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:34.808 [2024-10-07 11:43:16.340985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:32:34.808 [2024-10-07 11:43:16.340995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.808 [2024-10-07 11:43:16.360681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.808 [2024-10-07 11:43:16.360831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:34.808 [2024-10-07 11:43:16.360905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.684 ms 00:32:34.808 [2024-10-07 11:43:16.360941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:32:34.808 [2024-10-07 11:43:16.361591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:34.808 [2024-10-07 11:43:16.361674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:34.808 [2024-10-07 11:43:16.361749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.605 ms 00:32:34.808 [2024-10-07 11:43:16.361787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.808 [2024-10-07 11:43:16.421157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.808 [2024-10-07 11:43:16.421345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:34.808 [2024-10-07 11:43:16.421472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.808 [2024-10-07 11:43:16.421510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.808 [2024-10-07 11:43:16.421574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.808 [2024-10-07 11:43:16.421607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:34.808 [2024-10-07 11:43:16.421637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.808 [2024-10-07 11:43:16.421666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.808 [2024-10-07 11:43:16.421851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.808 [2024-10-07 11:43:16.421901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:34.808 [2024-10-07 11:43:16.421939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.808 [2024-10-07 11:43:16.421968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:34.808 [2024-10-07 11:43:16.422108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:34.808 [2024-10-07 11:43:16.422141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:34.808 [2024-10-07 11:43:16.422171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:34.808 [2024-10-07 11:43:16.422248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.547653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:35.068 [2024-10-07 11:43:16.547900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:35.068 [2024-10-07 11:43:16.547977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:35.068 [2024-10-07 11:43:16.548014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.649188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:35.068 [2024-10-07 11:43:16.649393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:35.068 [2024-10-07 11:43:16.649515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:35.068 [2024-10-07 11:43:16.649554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.649696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:35.068 [2024-10-07 11:43:16.649947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:35.068 [2024-10-07 11:43:16.649989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:35.068 [2024-10-07 11:43:16.650027] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.650129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:35.068 [2024-10-07 11:43:16.650197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:35.068 [2024-10-07 11:43:16.650271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:35.068 [2024-10-07 11:43:16.650313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.650452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:35.068 [2024-10-07 11:43:16.650490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:35.068 [2024-10-07 11:43:16.650584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:35.068 [2024-10-07 11:43:16.650619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.650709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:35.068 [2024-10-07 11:43:16.650758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:35.068 [2024-10-07 11:43:16.650793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:35.068 [2024-10-07 11:43:16.650895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.650956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:35.068 [2024-10-07 11:43:16.650988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:35.068 [2024-10-07 11:43:16.651019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:35.068 [2024-10-07 11:43:16.651095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.651226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:35.068 [2024-10-07 11:43:16.651265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:35.068 [2024-10-07 11:43:16.651338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:35.068 [2024-10-07 11:43:16.651372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:35.068 [2024-10-07 11:43:16.651567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 401.962 ms, result 0 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:36.446 Remove shared memory files 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:36.446 11:43:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81816 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:36.446 ************************************ 00:32:36.446 END TEST ftl_upgrade_shutdown 00:32:36.446 ************************************ 00:32:36.446 00:32:36.446 real 1m31.260s 00:32:36.446 user 2m6.609s 00:32:36.446 sys 0m21.782s 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.446 11:43:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:36.705 11:43:18 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:32:36.705 11:43:18 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:36.705 11:43:18 ftl -- ftl/ftl.sh@14 -- # killprocess 74704 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@950 -- # '[' -z 74704 ']' 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@954 -- # kill -0 74704 00:32:36.705 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74704) - No such process 00:32:36.705 Process with pid 74704 is not found 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 74704 is not found' 00:32:36.705 11:43:18 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:36.705 11:43:18 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82299 00:32:36.705 11:43:18 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:36.705 11:43:18 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82299 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@831 -- # '[' -z 82299 ']' 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.705 11:43:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:36.705 [2024-10-07 11:43:18.272500] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:32:36.705 [2024-10-07 11:43:18.272630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82299 ] 00:32:36.964 [2024-10-07 11:43:18.445519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.964 [2024-10-07 11:43:18.669776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.900 11:43:19 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:37.900 11:43:19 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:37.900 11:43:19 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:38.158 nvme0n1 00:32:38.158 11:43:19 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:38.158 11:43:19 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:38.158 11:43:19 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:38.416 11:43:20 ftl -- ftl/common.sh@28 -- # stores=d1306d2b-3ff9-4f54-a0b8-9b5c3cd1671b 00:32:38.416 11:43:20 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:38.416 11:43:20 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d1306d2b-3ff9-4f54-a0b8-9b5c3cd1671b 00:32:38.675 11:43:20 ftl -- ftl/ftl.sh@23 -- # killprocess 82299 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@950 -- # '[' -z 82299 ']' 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@954 -- # kill -0 82299 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@955 -- # uname 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82299 00:32:38.675 killing process with pid 82299 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82299' 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@969 -- # kill 82299 00:32:38.675 11:43:20 ftl -- common/autotest_common.sh@974 -- # wait 82299 00:32:41.209 11:43:22 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:41.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:41.768 Waiting for block devices as requested 00:32:41.768 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:41.768 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:42.026 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:42.026 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:47.296 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:47.296 11:43:28 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:47.296 Remove shared memory files 00:32:47.296 11:43:28 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:47.296 11:43:28 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:47.296 11:43:28 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:47.296 11:43:28 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:47.296 11:43:28 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:47.296 11:43:28 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:47.296 
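[Editor's sketch] For readers following the xtrace, the killprocess helper invoked repeatedly above (pids 82033, 74704, 82299) reduces to roughly the following. This is a minimal sketch reconstructed from the traced line markers in common/autotest_common.sh (@950-@974), not the verbatim source:

killprocess() {
  local pid=$1 process_name
  [[ -n $pid ]] || return 1                          # @950: no pid, nothing to do
  if ! kill -0 "$pid" 2>/dev/null; then              # @954: is the process alive?
    echo "Process with pid $pid is not found"        # the pid-74704 branch seen above
    return 1
  fi
  if [[ $(uname) == Linux ]]; then                   # @955
    process_name=$(ps --no-headers -o comm= "$pid")  # @956: e.g. reactor_0
  fi
  [[ $process_name == sudo ]] && return 1            # @960: never signal a sudo wrapper
  echo "killing process with pid $pid"               # @968
  kill "$pid"                                        # @969
  wait "$pid" 2>/dev/null || true                    # @974: reap if it is our child
}

The guard at @960 matters in this log: the reactors run as reactor_0, so the helper signals them directly, but it refuses to terminate a sudo parent by mistake.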
************************************ 00:32:47.296 END TEST ftl 00:32:47.296 ************************************ 00:32:47.296 00:32:47.296 real 11m10.907s 00:32:47.296 user 13m47.043s 00:32:47.296 sys 1m29.079s 00:32:47.296 11:43:28 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.296 11:43:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:47.296 11:43:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:47.296 11:43:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:47.296 11:43:28 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:47.296 11:43:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:47.296 11:43:28 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:47.296 11:43:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:47.296 11:43:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:47.296 11:43:28 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:47.296 11:43:28 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:47.296 11:43:28 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:47.296 11:43:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:47.296 11:43:28 -- common/autotest_common.sh@10 -- # set +x 00:32:47.296 11:43:28 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:47.296 11:43:28 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:47.296 11:43:28 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:47.296 11:43:28 -- common/autotest_common.sh@10 -- # set +x 00:32:49.829 INFO: APP EXITING 00:32:49.829 INFO: killing all VMs 00:32:49.829 INFO: killing vhost app 00:32:49.829 INFO: EXIT DONE 00:32:49.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:50.397 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:50.397 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:50.397 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:50.397 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:50.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:51.225 Cleaning 00:32:51.225 Removing: /var/run/dpdk/spdk0/config 00:32:51.225 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:51.225 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:51.225 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:51.225 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:51.225 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:51.225 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:51.226 Removing: /var/run/dpdk/spdk0 00:32:51.226 Removing: /var/run/dpdk/spdk_pid58000 00:32:51.226 Removing: /var/run/dpdk/spdk_pid58251 00:32:51.226 Removing: /var/run/dpdk/spdk_pid58486 00:32:51.226 Removing: /var/run/dpdk/spdk_pid58601 00:32:51.226 Removing: /var/run/dpdk/spdk_pid58657 00:32:51.226 Removing: /var/run/dpdk/spdk_pid58796 00:32:51.226 Removing: /var/run/dpdk/spdk_pid58819 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59035 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59159 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59277 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59410 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59518 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59563 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59605 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59681 00:32:51.226 Removing: /var/run/dpdk/spdk_pid59804 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60273 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60355 
00:32:51.226 Removing: /var/run/dpdk/spdk_pid60440 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60456 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60626 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60647 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60818 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60845 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60920 00:32:51.226 Removing: /var/run/dpdk/spdk_pid60938 00:32:51.226 Removing: /var/run/dpdk/spdk_pid61008 00:32:51.226 Removing: /var/run/dpdk/spdk_pid61031 00:32:51.226 Removing: /var/run/dpdk/spdk_pid61243 00:32:51.226 Removing: /var/run/dpdk/spdk_pid61285 00:32:51.226 Removing: /var/run/dpdk/spdk_pid61374 00:32:51.226 Removing: /var/run/dpdk/spdk_pid61568 00:32:51.226 Removing: /var/run/dpdk/spdk_pid61674 00:32:51.226 Removing: /var/run/dpdk/spdk_pid61721 00:32:51.485 Removing: /var/run/dpdk/spdk_pid62186 00:32:51.485 Removing: /var/run/dpdk/spdk_pid62295 00:32:51.485 Removing: /var/run/dpdk/spdk_pid62410 00:32:51.485 Removing: /var/run/dpdk/spdk_pid62468 00:32:51.485 Removing: /var/run/dpdk/spdk_pid62495 00:32:51.485 Removing: /var/run/dpdk/spdk_pid62583 00:32:51.485 Removing: /var/run/dpdk/spdk_pid63236 00:32:51.485 Removing: /var/run/dpdk/spdk_pid63278 00:32:51.485 Removing: /var/run/dpdk/spdk_pid63787 00:32:51.485 Removing: /var/run/dpdk/spdk_pid63891 00:32:51.485 Removing: /var/run/dpdk/spdk_pid64006 00:32:51.485 Removing: /var/run/dpdk/spdk_pid64064 00:32:51.485 Removing: /var/run/dpdk/spdk_pid64095 00:32:51.485 Removing: /var/run/dpdk/spdk_pid64126 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66015 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66170 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66179 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66191 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66238 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66242 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66254 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66299 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66303 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66315 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66362 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66366 00:32:51.485 Removing: /var/run/dpdk/spdk_pid66378 00:32:51.485 Removing: /var/run/dpdk/spdk_pid67791 00:32:51.485 Removing: /var/run/dpdk/spdk_pid67907 00:32:51.485 Removing: /var/run/dpdk/spdk_pid69337 00:32:51.485 Removing: /var/run/dpdk/spdk_pid70710 00:32:51.485 Removing: /var/run/dpdk/spdk_pid70821 00:32:51.485 Removing: /var/run/dpdk/spdk_pid70941 00:32:51.485 Removing: /var/run/dpdk/spdk_pid71056 00:32:51.485 Removing: /var/run/dpdk/spdk_pid71188 00:32:51.485 Removing: /var/run/dpdk/spdk_pid71274 00:32:51.485 Removing: /var/run/dpdk/spdk_pid71422 00:32:51.485 Removing: /var/run/dpdk/spdk_pid71804 00:32:51.485 Removing: /var/run/dpdk/spdk_pid71846 00:32:51.485 Removing: /var/run/dpdk/spdk_pid72321 00:32:51.485 Removing: /var/run/dpdk/spdk_pid72514 00:32:51.485 Removing: /var/run/dpdk/spdk_pid72625 00:32:51.485 Removing: /var/run/dpdk/spdk_pid72738 00:32:51.485 Removing: /var/run/dpdk/spdk_pid72797 00:32:51.485 Removing: /var/run/dpdk/spdk_pid72828 00:32:51.485 Removing: /var/run/dpdk/spdk_pid73142 00:32:51.485 Removing: /var/run/dpdk/spdk_pid73216 00:32:51.485 Removing: /var/run/dpdk/spdk_pid73309 00:32:51.485 Removing: /var/run/dpdk/spdk_pid73746 00:32:51.485 Removing: /var/run/dpdk/spdk_pid73893 00:32:51.485 Removing: /var/run/dpdk/spdk_pid74704 00:32:51.485 Removing: /var/run/dpdk/spdk_pid74853 00:32:51.485 Removing: /var/run/dpdk/spdk_pid75081 00:32:51.485 Removing: 
/var/run/dpdk/spdk_pid75185 00:32:51.485 Removing: /var/run/dpdk/spdk_pid75515 00:32:51.485 Removing: /var/run/dpdk/spdk_pid75780 00:32:51.485 Removing: /var/run/dpdk/spdk_pid76132 00:32:51.485 Removing: /var/run/dpdk/spdk_pid76337 00:32:51.485 Removing: /var/run/dpdk/spdk_pid76473 00:32:51.485 Removing: /var/run/dpdk/spdk_pid76542 00:32:51.485 Removing: /var/run/dpdk/spdk_pid76680 00:32:51.485 Removing: /var/run/dpdk/spdk_pid76716 00:32:51.744 Removing: /var/run/dpdk/spdk_pid76780 00:32:51.744 Removing: /var/run/dpdk/spdk_pid76984 00:32:51.745 Removing: /var/run/dpdk/spdk_pid77231 00:32:51.745 Removing: /var/run/dpdk/spdk_pid77642 00:32:51.745 Removing: /var/run/dpdk/spdk_pid78052 00:32:51.745 Removing: /var/run/dpdk/spdk_pid78481 00:32:51.745 Removing: /var/run/dpdk/spdk_pid78978 00:32:51.745 Removing: /var/run/dpdk/spdk_pid79122 00:32:51.745 Removing: /var/run/dpdk/spdk_pid79216 00:32:51.745 Removing: /var/run/dpdk/spdk_pid79846 00:32:51.745 Removing: /var/run/dpdk/spdk_pid79921 00:32:51.745 Removing: /var/run/dpdk/spdk_pid80363 00:32:51.745 Removing: /var/run/dpdk/spdk_pid80745 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81235 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81363 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81424 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81488 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81545 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81613 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81816 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81905 00:32:51.745 Removing: /var/run/dpdk/spdk_pid81972 00:32:51.745 Removing: /var/run/dpdk/spdk_pid82033 00:32:51.745 Removing: /var/run/dpdk/spdk_pid82073 00:32:51.745 Removing: /var/run/dpdk/spdk_pid82169 00:32:51.745 Removing: /var/run/dpdk/spdk_pid82299 00:32:51.745 Clean 00:32:51.745 11:43:33 -- common/autotest_common.sh@1451 -- # return 0 00:32:51.745 11:43:33 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:32:51.745 11:43:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.745 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:32:51.745 11:43:33 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:32:51.745 11:43:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.745 11:43:33 -- common/autotest_common.sh@10 -- # set +x 00:32:52.050 11:43:33 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:52.050 11:43:33 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:52.050 11:43:33 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:52.050 11:43:33 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:52.050 11:43:33 -- spdk/autotest.sh@394 -- # hostname 00:32:52.050 11:43:33 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:52.050 geninfo: WARNING: invalid characters removed from testname! 
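[Editor's sketch] The coverage merge-and-filter pass that follows (autotest.sh@395 through @403) boils down to the sequence below. Paths are abbreviated, and LCOV_OPTS stands in for the full set of --rc branch/function/genhtml switches shown in the log; the real run also passes --ignore-errors unused for the /usr/* pattern only:

LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
out=/home/vagrant/spdk_repo/output   # assumption: the $out used by autotest

# @395: merge the pre-test baseline with the post-test capture.
lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
     -o "$out/cov_total.info"

# @396-@403: strip trees that should not count toward SPDK coverage.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done

Writing the -r output back over the same tracefile is safe here because lcov reads the input fully before emitting the filtered result, which is why the log shows cov_total.info as both input and output at every step.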
00:33:18.612 11:43:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:21.143 11:44:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:23.045 11:44:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:25.577 11:44:06 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:27.481 11:44:09 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:30.014 11:44:11 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:32.575 11:44:13 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:32.575 11:44:13 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:33:32.575 11:44:13 -- common/autotest_common.sh@1681 -- $ lcov --version 00:33:32.575 11:44:13 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:33:32.575 11:44:13 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:33:32.575 11:44:13 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:33:32.575 11:44:13 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:33:32.575 11:44:13 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:33:32.575 11:44:13 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:32.575 11:44:13 -- scripts/common.sh@336 -- $ read -ra ver1 00:33:32.576 11:44:13 -- scripts/common.sh@337 -- $ IFS=.-: 00:33:32.576 11:44:13 -- scripts/common.sh@337 -- $ read -ra ver2 00:33:32.576 11:44:13 -- scripts/common.sh@338 -- $ local 'op=<' 00:33:32.576 11:44:13 -- scripts/common.sh@340 -- $ ver1_l=2 00:33:32.576 11:44:13 -- scripts/common.sh@341 -- $ ver2_l=1 00:33:32.576 11:44:13 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:33:32.576 11:44:13 -- scripts/common.sh@344 -- $ case "$op" in 00:33:32.576 11:44:13 -- scripts/common.sh@345 -- $ : 1 00:33:32.576 11:44:13 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:33:32.576 11:44:13 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:32.576 11:44:13 -- scripts/common.sh@365 -- $ decimal 1 00:33:32.576 11:44:13 -- scripts/common.sh@353 -- $ local d=1 00:33:32.576 11:44:13 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:32.576 11:44:13 -- scripts/common.sh@355 -- $ echo 1 00:33:32.576 11:44:13 -- scripts/common.sh@365 -- $ ver1[v]=1 00:33:32.576 11:44:13 -- scripts/common.sh@366 -- $ decimal 2 00:33:32.576 11:44:13 -- scripts/common.sh@353 -- $ local d=2 00:33:32.576 11:44:13 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:32.576 11:44:13 -- scripts/common.sh@355 -- $ echo 2 00:33:32.576 11:44:13 -- scripts/common.sh@366 -- $ ver2[v]=2 00:33:32.576 11:44:13 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:33:32.576 11:44:13 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:33:32.576 11:44:13 -- scripts/common.sh@368 -- $ return 0 00:33:32.576 11:44:13 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.576 11:44:13 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:33:32.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.576 --rc genhtml_branch_coverage=1 00:33:32.576 --rc genhtml_function_coverage=1 00:33:32.576 --rc genhtml_legend=1 00:33:32.576 --rc geninfo_all_blocks=1 00:33:32.576 --rc geninfo_unexecuted_blocks=1 00:33:32.576 00:33:32.576 ' 00:33:32.576 11:44:13 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:33:32.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.576 --rc genhtml_branch_coverage=1 00:33:32.576 --rc genhtml_function_coverage=1 00:33:32.576 --rc genhtml_legend=1 00:33:32.576 --rc geninfo_all_blocks=1 00:33:32.576 --rc geninfo_unexecuted_blocks=1 00:33:32.576 00:33:32.576 ' 00:33:32.576 11:44:13 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:33:32.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.576 --rc genhtml_branch_coverage=1 00:33:32.576 --rc genhtml_function_coverage=1 00:33:32.576 --rc genhtml_legend=1 00:33:32.576 --rc geninfo_all_blocks=1 00:33:32.576 --rc geninfo_unexecuted_blocks=1 00:33:32.576 00:33:32.576 ' 00:33:32.576 11:44:13 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:33:32.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.576 --rc genhtml_branch_coverage=1 00:33:32.576 --rc genhtml_function_coverage=1 00:33:32.576 --rc genhtml_legend=1 00:33:32.576 --rc geninfo_all_blocks=1 00:33:32.576 --rc geninfo_unexecuted_blocks=1 00:33:32.576 00:33:32.576 ' 00:33:32.576 11:44:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:32.576 11:44:13 -- scripts/common.sh@15 -- $ shopt -s extglob 00:33:32.576 11:44:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:32.576 11:44:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.576 11:44:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.576 11:44:13 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.576 11:44:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.576 11:44:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.576 11:44:13 -- paths/export.sh@5 -- $ export PATH 00:33:32.576 11:44:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.576 11:44:13 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:32.576 11:44:13 -- common/autobuild_common.sh@486 -- $ date +%s 00:33:32.576 11:44:13 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728301453.XXXXXX 00:33:32.576 11:44:13 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728301453.0U024A 00:33:32.576 11:44:13 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:33:32.576 11:44:13 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:33:32.576 11:44:13 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:32.576 11:44:13 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:32.576 11:44:13 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:32.576 11:44:13 -- common/autobuild_common.sh@502 -- $ get_config_params 00:33:32.576 11:44:13 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:33:32.576 11:44:13 -- common/autotest_common.sh@10 -- $ set +x 00:33:32.576 11:44:13 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:33:32.576 11:44:13 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:33:32.576 11:44:13 -- pm/common@17 -- $ local monitor 00:33:32.576 11:44:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:32.576 11:44:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:33:32.576 11:44:13 -- pm/common@25 -- $ sleep 1 00:33:32.576 11:44:13 -- pm/common@21 -- $ date +%s 00:33:32.576 11:44:13 -- pm/common@21 -- $ date +%s 00:33:32.576 11:44:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728301453 00:33:32.576 11:44:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728301453 00:33:32.576 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728301453_collect-cpu-load.pm.log 00:33:32.576 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728301453_collect-vmstat.pm.log 00:33:33.513 11:44:14 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:33:33.513 11:44:14 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:33:33.513 11:44:14 -- spdk/autopackage.sh@14 -- $ timing_finish 00:33:33.513 11:44:14 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:33.513 11:44:14 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:33.513 11:44:14 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:33.513 11:44:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:33.513 11:44:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:33.513 11:44:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:33.513 11:44:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:33.513 11:44:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:33.514 11:44:15 -- pm/common@44 -- $ pid=84030 00:33:33.514 11:44:15 -- pm/common@50 -- $ kill -TERM 84030 00:33:33.514 11:44:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:33.514 11:44:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:33.514 11:44:15 -- pm/common@44 -- $ pid=84032 00:33:33.514 11:44:15 -- pm/common@50 -- $ kill -TERM 84032 00:33:33.514 + [[ -n 5243 ]] 00:33:33.514 + sudo kill 5243 00:33:33.523 [Pipeline] } 00:33:33.539 [Pipeline] // timeout 00:33:33.543 [Pipeline] } 00:33:33.556 [Pipeline] // stage 00:33:33.560 [Pipeline] } 00:33:33.574 [Pipeline] // catchError 00:33:33.583 [Pipeline] stage 00:33:33.585 [Pipeline] { (Stop VM) 00:33:33.597 [Pipeline] sh 00:33:33.879 + vagrant halt 00:33:37.166 ==> default: Halting domain... 00:33:43.744 [Pipeline] sh 00:33:44.025 + vagrant destroy -f 00:33:47.311 ==> default: Removing domain... 
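[Editor's sketch] The monitor teardown traced at pm/common above (signal_monitor_resources TERM, pids 84030 and 84032) follows a simple pidfile pattern. In the sketch below, $power_dir and the two literal monitor names stand in for the real $output/power directory and MONITOR_RESOURCES array, and reading the pid out of the pidfile is an assumption; the trace only shows the existence check and the kill:

signal_monitor_resources() {                         # pm/common@29/@40
  local signal=$1 monitor pid pidfile
  for monitor in collect-cpu-load collect-vmstat; do # @42
    pidfile=$power_dir/$monitor.pid
    [[ -e $pidfile ]] || continue                    # @43: monitor never started
    pid=$(<"$pidfile")                               # assumed: pid stored at startup
    kill -"$signal" "$pid" || true                   # @50: e.g. kill -TERM 84030
  done
}
signal_monitor_resources TERM                        # as called by stop_monitor_resources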
00:33:47.582 [Pipeline] sh 00:33:47.897 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:33:47.932 [Pipeline] } 00:33:47.945 [Pipeline] // stage 00:33:47.951 [Pipeline] } 00:33:47.964 [Pipeline] // dir 00:33:47.969 [Pipeline] } 00:33:47.983 [Pipeline] // wrap 00:33:47.989 [Pipeline] } 00:33:48.002 [Pipeline] // catchError 00:33:48.012 [Pipeline] stage 00:33:48.014 [Pipeline] { (Epilogue) 00:33:48.028 [Pipeline] sh 00:33:48.320 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:54.895 [Pipeline] catchError 00:33:54.897 [Pipeline] { 00:33:54.910 [Pipeline] sh 00:33:55.192 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:55.192 Artifacts sizes are good 00:33:55.202 [Pipeline] } 00:33:55.216 [Pipeline] // catchError 00:33:55.227 [Pipeline] archiveArtifacts 00:33:55.234 Archiving artifacts 00:33:55.360 [Pipeline] cleanWs 00:33:55.372 [WS-CLEANUP] Deleting project workspace... 00:33:55.372 [WS-CLEANUP] Deferred wipeout is used... 00:33:55.380 [WS-CLEANUP] done 00:33:55.382 [Pipeline] } 00:33:55.398 [Pipeline] // stage 00:33:55.405 [Pipeline] } 00:33:55.420 [Pipeline] // node 00:33:55.425 [Pipeline] End of Pipeline 00:33:55.456 Finished: SUCCESS
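[Editor's sketch] As a closing reference, the lcov version gate traced earlier at scripts/common.sh (11:44:13, lt 1.15 2 via cmp_versions 1.15 '<' 2) compares dotted versions component-wise. A simplified sketch, omitting the decimal() digit validation and the extra operators the real helper supports:

cmp_versions() {                                     # scripts/common.sh@333+
  local ver1 ver2 op=$2 v
  IFS=.-: read -ra ver1 <<< "$1"                     # @336-@337
  IFS=.-: read -ra ver2 <<< "$3"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
  done
  [[ $op == '=' || $op == '<=' || $op == '>=' ]]     # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }                 # used above as: lt 1.15 2

Here lt 1.15 2 succeeds (1 < 2 on the first component), so the run exports the newer --rc style LCOV_OPTS seen in the log.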