00:00:00.001 Started by upstream project "autotest-per-patch" build number 132500
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.150 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.151 The recommended git tool is: git
00:00:00.151 using credential 00000000-0000-0000-0000-000000000002
00:00:00.153 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.197 Fetching changes from the remote Git repository
00:00:00.199 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.242 Using shallow fetch with depth 1
00:00:00.243 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.243 > git --version # timeout=10
00:00:00.274 > git --version # 'git version 2.39.2'
00:00:00.274 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.288 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.288 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:09.553 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:09.567 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:09.580 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:09.580 > git config core.sparsecheckout # timeout=10
00:00:09.593 > git read-tree -mu HEAD # timeout=10
00:00:09.611 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:09.633 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:09.633 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:09.720 [Pipeline] Start of Pipeline
00:00:09.733 [Pipeline] library
00:00:09.734 Loading library shm_lib@master
00:00:09.734 Library shm_lib@master is cached. Copying from home.
00:00:09.746 [Pipeline] node
00:00:09.757 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:09.759 [Pipeline] {
00:00:09.770 [Pipeline] catchError
00:00:09.772 [Pipeline] {
00:00:09.784 [Pipeline] wrap
00:00:09.791 [Pipeline] {
00:00:09.800 [Pipeline] stage
00:00:09.802 [Pipeline] { (Prologue)
00:00:09.816 [Pipeline] echo
00:00:09.818 Node: VM-host-SM38
00:00:09.823 [Pipeline] cleanWs
00:00:09.833 [WS-CLEANUP] Deleting project workspace...
00:00:09.833 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.840 [WS-CLEANUP] done
00:00:10.036 [Pipeline] setCustomBuildProperty
00:00:10.107 [Pipeline] httpRequest
00:00:10.455 [Pipeline] echo
00:00:10.456 Sorcerer 10.211.164.20 is alive
00:00:10.465 [Pipeline] retry
00:00:10.467 [Pipeline] {
00:00:10.480 [Pipeline] httpRequest
00:00:10.486 HttpMethod: GET
00:00:10.486 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.487 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.488 Response Code: HTTP/1.1 200 OK
00:00:10.489 Success: Status code 200 is in the accepted range: 200,404
00:00:10.489 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.344 [Pipeline] }
00:00:12.360 [Pipeline] // retry
00:00:12.368 [Pipeline] sh
00:00:12.653 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.669 [Pipeline] httpRequest
00:00:13.271 [Pipeline] echo
00:00:13.272 Sorcerer 10.211.164.20 is alive
00:00:13.283 [Pipeline] retry
00:00:13.285 [Pipeline] {
00:00:13.335 [Pipeline] httpRequest
00:00:13.339 HttpMethod: GET
00:00:13.339 URL: http://10.211.164.20/packages/spdk_393e80fcdeb035f7d797ac6862be711954125177.tar.gz
00:00:13.340 Sending request to url: http://10.211.164.20/packages/spdk_393e80fcdeb035f7d797ac6862be711954125177.tar.gz
00:00:13.358 Response Code: HTTP/1.1 200 OK
00:00:13.358 Success: Status code 200 is in the accepted range: 200,404
00:00:13.359 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_393e80fcdeb035f7d797ac6862be711954125177.tar.gz
00:00:43.176 [Pipeline] }
00:00:43.193 [Pipeline] // retry
00:00:43.200 [Pipeline] sh
00:00:43.485 + tar --no-same-owner -xf spdk_393e80fcdeb035f7d797ac6862be711954125177.tar.gz
00:00:46.030 [Pipeline] sh
00:00:46.304 + git -C spdk log --oneline -n5
00:00:46.304 393e80fcd util: add method for setting fd_group's wrapper
00:00:46.304 1e9cebf19 util: multi-level fd_group nesting
00:00:46.304 09301ca15 util: keep track of nested child fd_groups
00:00:46.304 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:00:46.304 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:00:46.322 [Pipeline] writeFile
00:00:46.337 [Pipeline] sh
00:00:46.622 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:46.635 [Pipeline] sh
00:00:46.917 + cat autorun-spdk.conf
00:00:46.917 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.917 SPDK_TEST_NVME=1
00:00:46.917 SPDK_TEST_FTL=1
00:00:46.917 SPDK_TEST_ISAL=1
00:00:46.917 SPDK_RUN_ASAN=1
00:00:46.917 SPDK_RUN_UBSAN=1
00:00:46.917 SPDK_TEST_XNVME=1
00:00:46.917 SPDK_TEST_NVME_FDP=1
00:00:46.917 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:46.923 RUN_NIGHTLY=0
00:00:46.925 [Pipeline] }
00:00:46.939 [Pipeline] // stage
00:00:46.955 [Pipeline] stage
00:00:46.958 [Pipeline] { (Run VM)
00:00:46.971 [Pipeline] sh
00:00:47.253 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:47.253 + echo 'Start stage prepare_nvme.sh'
00:00:47.253 Start stage prepare_nvme.sh
00:00:47.253 + [[ -n 10 ]]
00:00:47.253 + disk_prefix=ex10
00:00:47.253 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:47.253 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:47.253 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:47.253 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.253 ++ SPDK_TEST_NVME=1
00:00:47.253 ++ SPDK_TEST_FTL=1
00:00:47.253 ++ SPDK_TEST_ISAL=1
00:00:47.253 ++ SPDK_RUN_ASAN=1
00:00:47.253 ++ SPDK_RUN_UBSAN=1
00:00:47.253 ++ SPDK_TEST_XNVME=1
00:00:47.253 ++ SPDK_TEST_NVME_FDP=1
00:00:47.253 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:47.253 ++ RUN_NIGHTLY=0
00:00:47.253 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:47.253 + nvme_files=()
00:00:47.253 + declare -A nvme_files
00:00:47.253 + backend_dir=/var/lib/libvirt/images/backends
00:00:47.253 + nvme_files['nvme.img']=5G
00:00:47.253 + nvme_files['nvme-cmb.img']=5G
00:00:47.253 + nvme_files['nvme-multi0.img']=4G
00:00:47.253 + nvme_files['nvme-multi1.img']=4G
00:00:47.253 + nvme_files['nvme-multi2.img']=4G
00:00:47.253 + nvme_files['nvme-openstack.img']=8G
00:00:47.253 + nvme_files['nvme-zns.img']=5G
00:00:47.253 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:47.253 + (( SPDK_TEST_FTL == 1 ))
00:00:47.254 + nvme_files["nvme-ftl.img"]=6G
00:00:47.254 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:47.254 + nvme_files["nvme-fdp.img"]=1G
00:00:47.254 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:47.254 + for nvme in "${!nvme_files[@]}"
00:00:47.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:00:47.254 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:47.254 + for nvme in "${!nvme_files[@]}"
00:00:47.254 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:00:48.190 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:48.448 + for nvme in "${!nvme_files[@]}"
00:00:48.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:00:48.448 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:48.448 + for nvme in "${!nvme_files[@]}"
00:00:48.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:00:48.448 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:48.448 + for nvme in "${!nvme_files[@]}"
00:00:48.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:00:48.448 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:48.448 + for nvme in "${!nvme_files[@]}"
00:00:48.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:00:48.707 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:48.707 + for nvme in "${!nvme_files[@]}"
00:00:48.707 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:00:49.280 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:49.280 + for nvme in "${!nvme_files[@]}"
00:00:49.280 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:00:49.280 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:49.280 + for nvme in "${!nvme_files[@]}"
00:00:49.280 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:00:49.873 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:49.873 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:00:49.873 + echo 'End stage prepare_nvme.sh'
00:00:49.873 End stage prepare_nvme.sh
00:00:49.885 [Pipeline] sh
00:00:50.171 + DISTRO=fedora39
00:00:50.171 + CPUS=10
00:00:50.171 + RAM=12288
00:00:50.171 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:50.171 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:50.171
00:00:50.171 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:50.171 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:50.171 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:50.171 HELP=0
00:00:50.171 DRY_RUN=0
00:00:50.171 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:00:50.171 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:50.171 NVME_AUTO_CREATE=0
00:00:50.171 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:00:50.171 NVME_CMB=,,,,
00:00:50.171 NVME_PMR=,,,,
00:00:50.171 NVME_ZNS=,,,,
00:00:50.171 NVME_MS=true,,,,
00:00:50.171 NVME_FDP=,,,on,
00:00:50.171 SPDK_VAGRANT_DISTRO=fedora39
00:00:50.171 SPDK_VAGRANT_VMCPU=10
00:00:50.171 SPDK_VAGRANT_VMRAM=12288
00:00:50.171 SPDK_VAGRANT_PROVIDER=libvirt
00:00:50.171 SPDK_VAGRANT_HTTP_PROXY=
00:00:50.171 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:50.171 SPDK_OPENSTACK_NETWORK=0
00:00:50.171 VAGRANT_PACKAGE_BOX=0
00:00:50.171 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:50.171 FORCE_DISTRO=true
00:00:50.171 VAGRANT_BOX_VERSION=
00:00:50.171 EXTRA_VAGRANTFILES=
00:00:50.171 NIC_MODEL=e1000
00:00:50.171
00:00:50.171 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:00:50.171 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:00:52.716 Bringing machine 'default' up with 'libvirt' provider...
00:00:52.976 ==> default: Creating image (snapshot of base box volume).
00:00:52.976 ==> default: Creating domain with the following settings...
00:00:52.976 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732535933_497e5b5fae67fb13af54
00:00:52.976 ==> default: -- Domain type: kvm
00:00:52.976 ==> default: -- Cpus: 10
00:00:52.976 ==> default: -- Feature: acpi
00:00:52.976 ==> default: -- Feature: apic
00:00:52.976 ==> default: -- Feature: pae
00:00:53.236 ==> default: -- Memory: 12288M
00:00:53.236 ==> default: -- Memory Backing: hugepages:
00:00:53.236 ==> default: -- Management MAC:
00:00:53.236 ==> default: -- Loader:
00:00:53.236 ==> default: -- Nvram:
00:00:53.236 ==> default: -- Base box: spdk/fedora39
00:00:53.236 ==> default: -- Storage pool: default
00:00:53.236 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732535933_497e5b5fae67fb13af54.img (20G)
00:00:53.236 ==> default: -- Volume Cache: default
00:00:53.236 ==> default: -- Kernel:
00:00:53.236 ==> default: -- Initrd:
00:00:53.236 ==> default: -- Graphics Type: vnc
00:00:53.236 ==> default: -- Graphics Port: -1
00:00:53.236 ==> default: -- Graphics IP: 127.0.0.1
00:00:53.236 ==> default: -- Graphics Password: Not defined
00:00:53.236 ==> default: -- Video Type: cirrus
00:00:53.236 ==> default: -- Video VRAM: 9216
00:00:53.236 ==> default: -- Sound Type:
00:00:53.236 ==> default: -- Keymap: en-us
00:00:53.236 ==> default: -- TPM Path:
00:00:53.236 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:53.236 ==> default: -- Command line args:
00:00:53.236 ==> default: -> value=-device,
00:00:53.236 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:53.236 ==> default: -> value=-drive,
00:00:53.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:53.236 ==> default: -> value=-device,
00:00:53.236 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:53.236 ==> default: -> value=-device,
00:00:53.236 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:53.236 ==> default: -> value=-drive,
00:00:53.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:00:53.236 ==> default: -> value=-device,
00:00:53.236 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:53.236 ==> default: -> value=-device,
00:00:53.236 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:53.236 ==> default: -> value=-drive,
00:00:53.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:53.236 ==> default: -> value=-device,
00:00:53.236 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:53.236 ==> default: -> value=-drive,
00:00:53.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:53.236 ==> default: -> value=-device,
00:00:53.236 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:53.236 ==> default: -> value=-drive,
00:00:53.236 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:53.236 ==> default: -> value=-device,
00:00:53.236 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:53.236 ==> default: -> value=-device,
00:00:53.237 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:53.237 ==> default: -> value=-device,
00:00:53.237 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:53.237 ==> default: -> value=-drive,
00:00:53.237 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:53.237 ==> default: -> value=-device,
00:00:53.237 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:53.237 ==> default: Creating shared folders metadata...
00:00:53.237 ==> default: Starting domain.
00:00:54.617 ==> default: Waiting for domain to get an IP address...
00:01:12.791 ==> default: Waiting for SSH to become available...
00:01:12.791 ==> default: Configuring and enabling network interfaces...
00:01:18.116 default: SSH address: 192.168.121.23:22
00:01:18.116 default: SSH username: vagrant
00:01:18.116 default: SSH auth method: private key
00:01:19.514 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:27.695 ==> default: Mounting SSHFS shared folder...
00:01:29.610 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:29.610 ==> default: Checking Mount..
00:01:30.998 ==> default: Folder Successfully Mounted!
00:01:30.998
00:01:30.998 SUCCESS!
00:01:30.998
00:01:30.998 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:30.998 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:30.998 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:30.998
00:01:31.009 [Pipeline] }
00:01:31.025 [Pipeline] // stage
00:01:31.034 [Pipeline] dir
00:01:31.035 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:31.037 [Pipeline] {
00:01:31.049 [Pipeline] catchError
00:01:31.052 [Pipeline] {
00:01:31.065 [Pipeline] sh
00:01:31.350 + vagrant ssh-config --host vagrant
00:01:31.351 + sed -ne '/^Host/,$p'
00:01:31.351 + tee ssh_conf
00:01:34.713 Host vagrant
00:01:34.714 HostName 192.168.121.23
00:01:34.714 User vagrant
00:01:34.714 Port 22
00:01:34.714 UserKnownHostsFile /dev/null
00:01:34.714 StrictHostKeyChecking no
00:01:34.714 PasswordAuthentication no
00:01:34.714 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:34.714 IdentitiesOnly yes
00:01:34.714 LogLevel FATAL
00:01:34.714 ForwardAgent yes
00:01:34.714 ForwardX11 yes
00:01:34.714
00:01:34.731 [Pipeline] withEnv
00:01:34.733 [Pipeline] {
00:01:34.747 [Pipeline] sh
00:01:35.033 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:35.033 source /etc/os-release
00:01:35.033 [[ -e /image.version ]] && img=$(< /image.version)
00:01:35.033 # Minimal, systemd-like check.
00:01:35.033 if [[ -e /.dockerenv ]]; then
00:01:35.033 # Clear garbage from the node'\''s name:
00:01:35.033 # agt-er_autotest_547-896 -> autotest_547-896
00:01:35.033 # $HOSTNAME is the actual container id
00:01:35.033 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:35.033 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:35.033 # We can assume this is a mount from a host where container is running,
00:01:35.033 # so fetch its hostname to easily identify the target swarm worker.
00:01:35.033 container="$(< /etc/hostname) ($agent)"
00:01:35.033 else
00:01:35.033 # Fallback
00:01:35.033 container=$agent
00:01:35.033 fi
00:01:35.033 fi
00:01:35.033 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:35.033 '
00:01:35.306 [Pipeline] }
00:01:35.321 [Pipeline] // withEnv
00:01:35.329 [Pipeline] setCustomBuildProperty
00:01:35.345 [Pipeline] stage
00:01:35.347 [Pipeline] { (Tests)
00:01:35.363 [Pipeline] sh
00:01:35.644 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:35.658 [Pipeline] sh
00:01:35.938 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:35.953 [Pipeline] timeout
00:01:35.953 Timeout set to expire in 50 min
00:01:35.955 [Pipeline] {
00:01:35.967 [Pipeline] sh
00:01:36.250 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:36.821 HEAD is now at 393e80fcd util: add method for setting fd_group's wrapper
00:01:36.834 [Pipeline] sh
00:01:37.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:37.397 [Pipeline] sh
00:01:37.683 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:37.961 [Pipeline] sh
00:01:38.257 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:38.523 ++ readlink -f spdk_repo
00:01:38.523 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:38.523 + [[ -n /home/vagrant/spdk_repo ]]
00:01:38.523 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:38.523 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:38.523 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:38.523 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:38.523 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:38.523 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:38.523 + cd /home/vagrant/spdk_repo
00:01:38.523 + source /etc/os-release
00:01:38.523 ++ NAME='Fedora Linux'
00:01:38.523 ++ VERSION='39 (Cloud Edition)'
00:01:38.523 ++ ID=fedora
00:01:38.523 ++ VERSION_ID=39
00:01:38.523 ++ VERSION_CODENAME=
00:01:38.523 ++ PLATFORM_ID=platform:f39
00:01:38.523 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:38.523 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:38.523 ++ LOGO=fedora-logo-icon
00:01:38.523 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:38.523 ++ HOME_URL=https://fedoraproject.org/
00:01:38.523 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:38.523 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:38.523 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:38.523 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:38.523 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:38.523 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:38.523 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:38.523 ++ SUPPORT_END=2024-11-12
00:01:38.523 ++ VARIANT='Cloud Edition'
00:01:38.523 ++ VARIANT_ID=cloud
00:01:38.523 + uname -a
00:01:38.523 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:38.523 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:38.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:39.152 Hugepages
00:01:39.152 node hugesize free / total
00:01:39.152 node0 1048576kB 0 / 0
00:01:39.152 node0 2048kB 0 / 0
00:01:39.152
00:01:39.152 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:39.152 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:39.152 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:39.152 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:39.152 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:39.152 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:39.152 + rm -f /tmp/spdk-ld-path
00:01:39.152 + source autorun-spdk.conf
00:01:39.152 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.152 ++ SPDK_TEST_NVME=1
00:01:39.152 ++ SPDK_TEST_FTL=1
00:01:39.152 ++ SPDK_TEST_ISAL=1
00:01:39.152 ++ SPDK_RUN_ASAN=1
00:01:39.152 ++ SPDK_RUN_UBSAN=1
00:01:39.152 ++ SPDK_TEST_XNVME=1
00:01:39.152 ++ SPDK_TEST_NVME_FDP=1
00:01:39.152 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:39.152 ++ RUN_NIGHTLY=0
00:01:39.152 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:39.152 + [[ -n '' ]]
00:01:39.152 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:39.152 + for M in /var/spdk/build-*-manifest.txt
00:01:39.152 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:39.152 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:39.415 + for M in /var/spdk/build-*-manifest.txt
00:01:39.415 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:39.415 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:39.415 + for M in /var/spdk/build-*-manifest.txt
00:01:39.415 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:39.415 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:39.415 ++ uname
00:01:39.415 + [[ Linux == \L\i\n\u\x ]]
00:01:39.415 + sudo dmesg -T
00:01:39.415 + sudo dmesg --clear
00:01:39.415 + dmesg_pid=5022 + [[ Fedora Linux == FreeBSD ]]
00:01:39.415 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:39.415 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:39.415 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:39.415 + [[ -x /usr/src/fio-static/fio ]]
00:01:39.415 + sudo dmesg -Tw
00:01:39.415 + export FIO_BIN=/usr/src/fio-static/fio
00:01:39.415 + FIO_BIN=/usr/src/fio-static/fio
00:01:39.415 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:39.415 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:39.415 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:39.415 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:39.415 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:39.415 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:39.415 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:39.415 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:39.415 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:39.415 11:59:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:39.415 11:59:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:39.415 11:59:40 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:39.415 11:59:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:39.415 11:59:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:39.415 11:59:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:39.415 11:59:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:39.415 11:59:40 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:39.415 11:59:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:39.415 11:59:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:39.415 11:59:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:39.415 11:59:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.415 11:59:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.415 11:59:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.415 11:59:40 -- paths/export.sh@5 -- $ export PATH
00:01:39.415 11:59:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.415 11:59:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:39.415 11:59:40 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:39.415 11:59:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732535980.XXXXXX
00:01:39.415 11:59:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732535980.zgRQaI
00:01:39.415 11:59:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:39.415 11:59:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:39.415 11:59:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:39.415 11:59:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:39.415 11:59:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:39.415 11:59:40 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:39.415 11:59:40 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:39.415 11:59:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:39.415 11:59:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:39.415 11:59:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:39.676 11:59:40 -- pm/common@17 -- $ local monitor
00:01:39.676 11:59:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.676 11:59:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.676 11:59:40 -- pm/common@25 -- $ sleep 1
00:01:39.676 11:59:40 -- pm/common@21 -- $ date +%s
00:01:39.676 11:59:40 -- pm/common@21 -- $ date +%s
00:01:39.677 11:59:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732535980
00:01:39.677 11:59:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732535980
00:01:39.677 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732535980_collect-cpu-load.pm.log
00:01:39.677 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732535980_collect-vmstat.pm.log
00:01:40.621 11:59:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:40.621 11:59:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:40.621 11:59:41 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:40.621 11:59:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:40.621 11:59:41 -- spdk/autobuild.sh@16 -- $ date -u
00:01:40.621 Mon Nov 25 11:59:41 AM UTC 2024
00:01:40.621 11:59:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:40.621 v25.01-pre-222-g393e80fcd
00:01:40.621 11:59:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:40.621 11:59:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:40.621 11:59:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:40.621 11:59:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:40.621 11:59:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:40.621 ************************************
00:01:40.621 START TEST asan
00:01:40.621 ************************************
00:01:40.621 using asan
00:01:40.621 11:59:41 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:40.621 ************************************
00:01:40.621 END TEST asan
00:01:40.621 ************************************
00:01:40.621
00:01:40.621 real 0m0.000s
00:01:40.621 user 0m0.000s
00:01:40.621 sys 0m0.000s
00:01:40.621 11:59:41 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:40.621 11:59:41 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:40.621 11:59:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:40.621 11:59:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:40.621 11:59:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:40.621 11:59:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:40.621 11:59:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:40.621 ************************************
00:01:40.621 START TEST ubsan
00:01:40.621 ************************************
00:01:40.621 using ubsan
00:01:40.621 11:59:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:40.621
00:01:40.621 real 0m0.000s
00:01:40.621 user 0m0.000s
00:01:40.621 sys 0m0.000s
00:01:40.621 ************************************
00:01:40.621 END TEST ubsan
00:01:40.621 ************************************
00:01:40.621 11:59:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:40.621 11:59:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:40.621 11:59:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:40.621 11:59:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:40.621 11:59:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:40.621 11:59:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:40.621 11:59:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:40.621 11:59:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:40.621 11:59:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:40.621 11:59:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:40.621 11:59:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:40.882 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:40.882 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:41.143 Using 'verbs' RDMA provider
00:01:54.356 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:04.373 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:04.373 Creating mk/config.mk...done.
00:02:04.373 Creating mk/cc.flags.mk...done.
00:02:04.373 Type 'make' to build.
00:02:04.373 12:00:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:04.373 12:00:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:04.373 12:00:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:04.373 12:00:04 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.373 ************************************
00:02:04.373 START TEST make
00:02:04.373 ************************************
00:02:04.373 12:00:04 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:04.373 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:04.373 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:04.373 meson setup builddir \
00:02:04.373 -Dwith-libaio=enabled \
00:02:04.373 -Dwith-liburing=enabled \
00:02:04.373 -Dwith-libvfn=disabled \
00:02:04.373 -Dwith-spdk=disabled \
00:02:04.373 -Dexamples=false \
00:02:04.373 -Dtests=false \
00:02:04.373 -Dtools=false && \
00:02:04.373 meson compile -C builddir && \
00:02:04.373 cd -)
00:02:04.373 make[1]: Nothing to be done for 'all'.
00:02:06.937 The Meson build system
00:02:06.937 Version: 1.5.0
00:02:06.937 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:06.937 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:06.937 Build type: native build
00:02:06.937 Project name: xnvme
00:02:06.937 Project version: 0.7.5
00:02:06.937 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:06.937 C linker for the host machine: cc ld.bfd 2.40-14
00:02:06.937 Host machine cpu family: x86_64
00:02:06.937 Host machine cpu: x86_64
00:02:06.937 Message: host_machine.system: linux
00:02:06.937 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:06.937 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:06.937 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:06.937 Run-time dependency threads found: YES
00:02:06.937 Has header "setupapi.h" : NO
00:02:06.937 Has header "linux/blkzoned.h" : YES
00:02:06.937 Has header "linux/blkzoned.h" : YES (cached)
00:02:06.937 Has header "libaio.h" : YES
00:02:06.937 Library aio found: YES
00:02:06.937 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:06.937 Run-time dependency liburing found: YES 2.2
00:02:06.937 Dependency libvfn skipped: feature with-libvfn disabled
00:02:06.937 Found CMake: /usr/bin/cmake (3.27.7)
00:02:06.937 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:06.937 Subproject spdk : skipped: feature with-spdk disabled
00:02:06.937 Run-time dependency appleframeworks found: NO (tried framework)
00:02:06.937 Run-time dependency appleframeworks found: NO (tried framework)
00:02:06.937 Library rt found: YES
00:02:06.937 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:06.937 Configuring xnvme_config.h using configuration
00:02:06.937 Configuring xnvme.spec using configuration
00:02:06.937 Run-time dependency bash-completion found: YES 2.11
00:02:06.937 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:06.937 Program cp found: YES (/usr/bin/cp)
00:02:06.937 Build targets in project: 3
00:02:06.937
00:02:06.937 xnvme 0.7.5
00:02:06.937
00:02:06.937 Subprojects
00:02:06.937 spdk : NO Feature 'with-spdk' disabled
00:02:06.937
00:02:06.937 User defined options
00:02:06.937 examples : false
00:02:06.937 tests : false
00:02:06.937 tools : false
00:02:06.937 with-libaio : enabled
00:02:06.937 with-liburing: enabled
00:02:06.937 with-libvfn : disabled
00:02:06.937 with-spdk : disabled
00:02:06.937
00:02:06.937 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:06.937 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:06.937 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:06.937 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:06.937 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:06.937 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:06.937 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:06.937 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:06.937 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:06.937 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:06.937 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:06.937 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:06.937 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:06.937 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:06.937 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:06.937 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:06.937 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:06.937 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:07.198 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:07.198 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:07.198 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:07.198 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:07.199 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:07.199 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:07.199 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:07.199 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:07.199 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:07.199 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:07.199 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:07.199 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:07.199 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:07.199 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:07.199 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:07.199 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:07.199 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:07.199 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:07.199 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:07.199 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:07.199 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:07.199 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:07.199 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:07.199 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:07.199 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:07.199 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:07.199 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:07.199 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:07.199 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:07.199 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:07.199 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:07.199 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:07.199 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:07.199 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:07.199 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:07.199 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:07.199 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:07.460 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:07.460 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:07.460 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:07.460 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:07.460 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:07.460 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:07.460 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:07.460 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:07.460 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:07.460 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:07.460 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:07.460 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:07.460 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:07.460 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:07.460 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:07.460 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:07.460 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:07.460 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:07.720 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:07.720 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:07.720 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:07.720 [75/76] Linking static target lib/libxnvme.a
00:02:07.720 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:07.720 INFO: autodetecting backend as ninja
00:02:07.720 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:07.979 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:14.554 The Meson build system
00:02:14.554 Version: 1.5.0
00:02:14.554 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:14.554 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:14.554 Build type: native build
00:02:14.554 Program cat found: YES (/usr/bin/cat)
00:02:14.554 Project name: DPDK
00:02:14.555 Project version: 24.03.0
00:02:14.555 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:14.555 C linker for the host machine: cc ld.bfd 2.40-14
00:02:14.555 Host machine cpu family: x86_64
00:02:14.555 Host machine cpu: x86_64
00:02:14.555 Message: ## Building in Developer Mode ##
00:02:14.555 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:14.555 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:14.555 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:14.555 Program python3 found: YES (/usr/bin/python3)
00:02:14.555 Program cat found: YES (/usr/bin/cat)
00:02:14.555 Compiler for C supports arguments -march=native: YES
00:02:14.555 Checking for size of "void *" : 8
00:02:14.555 Checking for size of "void *" : 8 (cached)
00:02:14.555 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:14.555 Library m found: YES
00:02:14.555 Library numa found: YES
00:02:14.555 Has header "numaif.h" : YES
00:02:14.555 Library fdt found: NO
00:02:14.555 Library execinfo found: NO
00:02:14.555 Has header "execinfo.h" : YES
00:02:14.555 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:14.555 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:14.555 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:14.555 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:14.555 Run-time dependency openssl found: YES 3.1.1
00:02:14.555 Run-time dependency libpcap found: YES 1.10.4
00:02:14.555 Has header "pcap.h" with dependency libpcap: YES
00:02:14.555 Compiler for C supports arguments -Wcast-qual: YES
00:02:14.555 Compiler for C supports arguments -Wdeprecated: YES
00:02:14.555 Compiler for C supports arguments -Wformat: YES
00:02:14.555 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:14.555 Compiler for C supports arguments -Wformat-security: NO
00:02:14.555 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:14.555 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:14.555 Compiler for C supports arguments -Wnested-externs: YES
00:02:14.555 Compiler for C supports arguments -Wold-style-definition: YES
00:02:14.555 Compiler for C supports arguments -Wpointer-arith: YES
00:02:14.555 Compiler for C supports arguments -Wsign-compare: YES
00:02:14.555 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:14.555 Compiler for C supports arguments -Wundef: YES
00:02:14.555 Compiler for C supports arguments -Wwrite-strings: YES
00:02:14.555 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:14.555 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:14.555 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:14.555 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:14.555 Program objdump found: YES (/usr/bin/objdump)
00:02:14.555 Compiler for C supports arguments -mavx512f: YES
00:02:14.555 Checking if "AVX512 checking" compiles: YES
00:02:14.555 Fetching value of define "__SSE4_2__" : 1
00:02:14.555 Fetching value of define "__AES__" : 1
00:02:14.555 Fetching value of define "__AVX__" : 1
00:02:14.555 Fetching value of define "__AVX2__" : 1
00:02:14.555 Fetching value of define "__AVX512BW__" : 1
00:02:14.555 Fetching value of define "__AVX512CD__" : 1
00:02:14.555 Fetching value of define "__AVX512DQ__" : 1
00:02:14.555 Fetching value of define "__AVX512F__" : 1
00:02:14.555 Fetching value of define "__AVX512VL__" : 1
00:02:14.555 Fetching value of define "__PCLMUL__" : 1
00:02:14.555 Fetching value of define "__RDRND__" : 1
00:02:14.555 Fetching value of define "__RDSEED__" : 1
00:02:14.555 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:14.555 Fetching value of define "__znver1__" : (undefined)
00:02:14.555 Fetching value of define "__znver2__" : (undefined)
00:02:14.555 Fetching value of define "__znver3__" : (undefined)
00:02:14.555 Fetching value of define "__znver4__" : (undefined)
00:02:14.555 Library asan found: YES
00:02:14.555 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:14.555 Message: lib/log: Defining dependency "log"
00:02:14.555 Message: lib/kvargs: Defining dependency "kvargs"
00:02:14.555 Message: lib/telemetry: Defining dependency "telemetry"
00:02:14.555 Library rt found: YES
00:02:14.555 Checking for function "getentropy" : NO
00:02:14.555 Message: lib/eal: Defining dependency "eal"
00:02:14.555 Message: lib/ring: Defining dependency "ring"
00:02:14.555 Message: lib/rcu: Defining dependency "rcu"
00:02:14.555 Message: lib/mempool: Defining dependency "mempool"
00:02:14.555 Message: lib/mbuf: Defining dependency "mbuf"
00:02:14.555 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:14.555 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:14.555 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:14.555 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:14.555 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:14.555 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:14.555 Compiler for C supports arguments -mpclmul: YES
00:02:14.555 Compiler for C supports arguments -maes: YES
00:02:14.555 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:14.555 Compiler for C supports arguments -mavx512bw: YES
00:02:14.555 Compiler for C supports arguments -mavx512dq: YES
00:02:14.555 Compiler for C supports arguments -mavx512vl: YES
00:02:14.555 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:14.555 Compiler for C supports arguments -mavx2: YES
00:02:14.555 Compiler for C supports arguments -mavx: YES
00:02:14.555 Message: lib/net: Defining dependency "net"
00:02:14.555 Message: lib/meter: Defining dependency "meter"
00:02:14.555 Message: lib/ethdev: Defining dependency "ethdev"
00:02:14.555 Message: lib/pci: Defining dependency "pci"
00:02:14.555 Message: lib/cmdline: Defining dependency "cmdline"
00:02:14.555 Message: lib/hash: Defining dependency "hash"
00:02:14.555 Message: lib/timer: Defining dependency "timer"
00:02:14.555 Message: lib/compressdev: Defining dependency "compressdev"
00:02:14.555 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:14.555 Message: lib/dmadev: Defining dependency "dmadev"
00:02:14.555 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:14.555 Message: lib/power: Defining dependency "power"
00:02:14.555 Message: lib/reorder: Defining dependency "reorder"
00:02:14.555 Message: lib/security: Defining dependency "security"
00:02:14.555 Has header "linux/userfaultfd.h" : YES
00:02:14.555 Has header "linux/vduse.h" : YES
00:02:14.555 Message: lib/vhost: Defining dependency "vhost"
00:02:14.555 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:14.555 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:14.555 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:14.555 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:14.555 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:14.555 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:14.555 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:14.555 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:14.555 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:14.555 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:14.555 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:14.555 Configuring doxy-api-html.conf using configuration
00:02:14.555 Configuring doxy-api-man.conf using configuration
00:02:14.555 Program mandb found: YES (/usr/bin/mandb)
00:02:14.555 Program sphinx-build found: NO
00:02:14.555 Configuring rte_build_config.h using configuration
00:02:14.555 Message:
00:02:14.555 =================
00:02:14.555 Applications Enabled
00:02:14.555 =================
00:02:14.555
00:02:14.555 apps:
00:02:14.555
00:02:14.555
00:02:14.555 Message:
00:02:14.555 =================
00:02:14.555 Libraries Enabled
00:02:14.555 =================
00:02:14.555
00:02:14.555 libs:
00:02:14.555 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:14.555 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:14.555 cryptodev, dmadev, power, reorder, security, vhost,
00:02:14.555
00:02:14.555 Message:
00:02:14.555 ===============
00:02:14.555 Drivers Enabled
00:02:14.555 ===============
00:02:14.555
00:02:14.555 common:
00:02:14.555
00:02:14.555 bus:
00:02:14.555 pci, vdev,
00:02:14.555 mempool:
00:02:14.555 ring,
00:02:14.555 dma:
00:02:14.555
00:02:14.555 net:
00:02:14.555
00:02:14.555 crypto:
00:02:14.555
00:02:14.555 compress:
00:02:14.555
00:02:14.555 vdpa:
00:02:14.555
00:02:14.555
00:02:14.555 Message:
00:02:14.555 =================
00:02:14.555 Content Skipped
00:02:14.555 =================
00:02:14.555
00:02:14.555 apps:
00:02:14.555 dumpcap: explicitly disabled via build config
00:02:14.555 graph: explicitly disabled via build config
00:02:14.555 pdump: explicitly disabled via build config
00:02:14.555 proc-info: explicitly disabled via build config
00:02:14.555 test-acl: explicitly disabled via build config
00:02:14.555 test-bbdev: explicitly disabled via build config
00:02:14.555 test-cmdline: explicitly disabled via build config
00:02:14.555 test-compress-perf: explicitly disabled via build config
00:02:14.555 test-crypto-perf: explicitly disabled via build config
00:02:14.555 test-dma-perf: explicitly disabled via build config
00:02:14.555 test-eventdev: explicitly disabled via build config
00:02:14.555 test-fib: explicitly disabled via build config
00:02:14.555 test-flow-perf: explicitly disabled via build config
00:02:14.555 test-gpudev: explicitly disabled via build config
00:02:14.555 test-mldev: explicitly disabled via build config
00:02:14.555 test-pipeline: explicitly disabled via build config
00:02:14.555 test-pmd: explicitly disabled via build config
00:02:14.555 test-regex: explicitly disabled via build config
00:02:14.555 test-sad: explicitly disabled via build config
00:02:14.555 test-security-perf: explicitly disabled via build config
00:02:14.555
00:02:14.555 libs:
00:02:14.555 argparse: explicitly disabled via build config
00:02:14.555 metrics: explicitly disabled via build config
00:02:14.556 acl: explicitly disabled via build config
00:02:14.556 bbdev: explicitly disabled via build config
00:02:14.556 bitratestats: explicitly disabled via build config
00:02:14.556 bpf: explicitly disabled via build config
00:02:14.556 cfgfile: explicitly disabled via build config
00:02:14.556 distributor: explicitly disabled via build config
00:02:14.556 efd: explicitly disabled via build config
00:02:14.556 eventdev: explicitly disabled via build config
00:02:14.556 dispatcher: explicitly disabled via build config
00:02:14.556 gpudev: explicitly disabled via build config
00:02:14.556 gro: explicitly disabled via build config
00:02:14.556 gso: explicitly disabled via build config
00:02:14.556 ip_frag: explicitly disabled via build config
00:02:14.556 jobstats: explicitly disabled via build config
00:02:14.556 latencystats: explicitly disabled via build config
00:02:14.556 lpm: explicitly disabled via build config
00:02:14.556 member: explicitly disabled via build config
00:02:14.556 pcapng: explicitly disabled via build config
00:02:14.556 rawdev: explicitly disabled via build config
00:02:14.556 regexdev: explicitly disabled via build config
00:02:14.556 mldev: explicitly disabled via build config
00:02:14.556 rib: explicitly disabled via build config
00:02:14.556 sched: explicitly disabled via build config
00:02:14.556 stack: explicitly disabled via build config
00:02:14.556 ipsec: explicitly disabled via build config
00:02:14.556 pdcp: explicitly disabled via build config
00:02:14.556 fib: explicitly disabled via build config
00:02:14.556 port: explicitly disabled via build config
00:02:14.556 pdump: explicitly disabled via build config
00:02:14.556 table: explicitly disabled via build config
00:02:14.556 pipeline: explicitly disabled via build config
00:02:14.556 graph: explicitly disabled via build config
00:02:14.556 node: explicitly disabled via build config
00:02:14.556
00:02:14.556 drivers:
00:02:14.556 common/cpt: not in enabled drivers build config
00:02:14.556 common/dpaax: not in enabled drivers build config
00:02:14.556 common/iavf: not in enabled drivers build config
00:02:14.556 common/idpf: not in enabled drivers build config
00:02:14.556 common/ionic: not in enabled drivers build config
00:02:14.556 common/mvep: not in enabled drivers build config
00:02:14.556 common/octeontx: not in enabled drivers build config
00:02:14.556 bus/auxiliary: not in enabled drivers build config
00:02:14.556 bus/cdx: not in enabled drivers build config
00:02:14.556 bus/dpaa: not in enabled drivers build config
00:02:14.556 bus/fslmc: not in enabled drivers build config
00:02:14.556 bus/ifpga: not in enabled drivers build config
00:02:14.556 bus/platform: not in enabled drivers build config
00:02:14.556 bus/uacce: not in enabled drivers build config
00:02:14.556 bus/vmbus: not in enabled drivers build config
00:02:14.556 common/cnxk: not in enabled drivers build config
00:02:14.556 common/mlx5: not in enabled drivers build config
00:02:14.556 common/nfp: not in enabled drivers build config
00:02:14.556 common/nitrox: not in enabled drivers build config
00:02:14.556 common/qat: not in enabled drivers build config
00:02:14.556 common/sfc_efx: not in enabled drivers build config
00:02:14.556 mempool/bucket: not in enabled drivers build config
00:02:14.556 mempool/cnxk: not in enabled drivers build config
00:02:14.556 mempool/dpaa: not in enabled drivers build config
00:02:14.556 mempool/dpaa2: not in enabled drivers build config
00:02:14.556 mempool/octeontx: not in enabled drivers build config
00:02:14.556 mempool/stack: not in enabled drivers build config
00:02:14.556 dma/cnxk: not in enabled drivers build config
00:02:14.556 dma/dpaa: not in enabled drivers build config
00:02:14.556 dma/dpaa2: not in enabled drivers build config
00:02:14.556 dma/hisilicon: not in enabled drivers build config
00:02:14.556 dma/idxd: not in enabled drivers build config
00:02:14.556 dma/ioat: not in enabled drivers build config
00:02:14.556 dma/skeleton: not in enabled drivers build config
00:02:14.556 net/af_packet: not in enabled drivers build config
00:02:14.556 net/af_xdp: not in enabled drivers build config
00:02:14.556 net/ark: not in enabled drivers build config
00:02:14.556 net/atlantic: not in enabled drivers build config
00:02:14.556 net/avp: not in enabled drivers build config
00:02:14.556 net/axgbe: not in enabled drivers build config
00:02:14.556 net/bnx2x: not in enabled drivers build config
00:02:14.556 net/bnxt: not in enabled drivers build config
00:02:14.556 net/bonding: not in enabled drivers build config
00:02:14.556 net/cnxk: not in enabled drivers build config
00:02:14.556 net/cpfl: not in enabled drivers
build config 00:02:14.556 net/cxgbe: not in enabled drivers build config 00:02:14.556 net/dpaa: not in enabled drivers build config 00:02:14.556 net/dpaa2: not in enabled drivers build config 00:02:14.556 net/e1000: not in enabled drivers build config 00:02:14.556 net/ena: not in enabled drivers build config 00:02:14.556 net/enetc: not in enabled drivers build config 00:02:14.556 net/enetfec: not in enabled drivers build config 00:02:14.556 net/enic: not in enabled drivers build config 00:02:14.556 net/failsafe: not in enabled drivers build config 00:02:14.556 net/fm10k: not in enabled drivers build config 00:02:14.556 net/gve: not in enabled drivers build config 00:02:14.556 net/hinic: not in enabled drivers build config 00:02:14.556 net/hns3: not in enabled drivers build config 00:02:14.556 net/i40e: not in enabled drivers build config 00:02:14.556 net/iavf: not in enabled drivers build config 00:02:14.556 net/ice: not in enabled drivers build config 00:02:14.556 net/idpf: not in enabled drivers build config 00:02:14.556 net/igc: not in enabled drivers build config 00:02:14.556 net/ionic: not in enabled drivers build config 00:02:14.556 net/ipn3ke: not in enabled drivers build config 00:02:14.556 net/ixgbe: not in enabled drivers build config 00:02:14.556 net/mana: not in enabled drivers build config 00:02:14.556 net/memif: not in enabled drivers build config 00:02:14.556 net/mlx4: not in enabled drivers build config 00:02:14.556 net/mlx5: not in enabled drivers build config 00:02:14.556 net/mvneta: not in enabled drivers build config 00:02:14.556 net/mvpp2: not in enabled drivers build config 00:02:14.556 net/netvsc: not in enabled drivers build config 00:02:14.556 net/nfb: not in enabled drivers build config 00:02:14.556 net/nfp: not in enabled drivers build config 00:02:14.556 net/ngbe: not in enabled drivers build config 00:02:14.556 net/null: not in enabled drivers build config 00:02:14.556 net/octeontx: not in enabled drivers build config 00:02:14.556 net/octeon_ep: not in enabled drivers build config 00:02:14.556 net/pcap: not in enabled drivers build config 00:02:14.556 net/pfe: not in enabled drivers build config 00:02:14.556 net/qede: not in enabled drivers build config 00:02:14.556 net/ring: not in enabled drivers build config 00:02:14.556 net/sfc: not in enabled drivers build config 00:02:14.556 net/softnic: not in enabled drivers build config 00:02:14.556 net/tap: not in enabled drivers build config 00:02:14.556 net/thunderx: not in enabled drivers build config 00:02:14.556 net/txgbe: not in enabled drivers build config 00:02:14.556 net/vdev_netvsc: not in enabled drivers build config 00:02:14.556 net/vhost: not in enabled drivers build config 00:02:14.556 net/virtio: not in enabled drivers build config 00:02:14.556 net/vmxnet3: not in enabled drivers build config 00:02:14.556 raw/*: missing internal dependency, "rawdev" 00:02:14.556 crypto/armv8: not in enabled drivers build config 00:02:14.556 crypto/bcmfs: not in enabled drivers build config 00:02:14.556 crypto/caam_jr: not in enabled drivers build config 00:02:14.556 crypto/ccp: not in enabled drivers build config 00:02:14.556 crypto/cnxk: not in enabled drivers build config 00:02:14.556 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.556 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.556 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.556 crypto/mlx5: not in enabled drivers build config 00:02:14.556 crypto/mvsam: not in enabled drivers build config 00:02:14.556 crypto/nitrox: 
not in enabled drivers build config 00:02:14.556 crypto/null: not in enabled drivers build config 00:02:14.556 crypto/octeontx: not in enabled drivers build config 00:02:14.556 crypto/openssl: not in enabled drivers build config 00:02:14.556 crypto/scheduler: not in enabled drivers build config 00:02:14.556 crypto/uadk: not in enabled drivers build config 00:02:14.556 crypto/virtio: not in enabled drivers build config 00:02:14.556 compress/isal: not in enabled drivers build config 00:02:14.556 compress/mlx5: not in enabled drivers build config 00:02:14.556 compress/nitrox: not in enabled drivers build config 00:02:14.556 compress/octeontx: not in enabled drivers build config 00:02:14.556 compress/zlib: not in enabled drivers build config 00:02:14.556 regex/*: missing internal dependency, "regexdev" 00:02:14.556 ml/*: missing internal dependency, "mldev" 00:02:14.556 vdpa/ifc: not in enabled drivers build config 00:02:14.556 vdpa/mlx5: not in enabled drivers build config 00:02:14.556 vdpa/nfp: not in enabled drivers build config 00:02:14.556 vdpa/sfc: not in enabled drivers build config 00:02:14.556 event/*: missing internal dependency, "eventdev" 00:02:14.556 baseband/*: missing internal dependency, "bbdev" 00:02:14.556 gpu/*: missing internal dependency, "gpudev" 00:02:14.556 00:02:14.556 00:02:14.556 Build targets in project: 84 00:02:14.556 00:02:14.556 DPDK 24.03.0 00:02:14.556 00:02:14.556 User defined options 00:02:14.556 buildtype : debug 00:02:14.556 default_library : shared 00:02:14.556 libdir : lib 00:02:14.556 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:14.556 b_sanitize : address 00:02:14.556 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:14.556 c_link_args : 00:02:14.556 cpu_instruction_set: native 00:02:14.556 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:14.556 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:14.556 enable_docs : false 00:02:14.556 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:14.556 enable_kmods : false 00:02:14.556 max_lcores : 128 00:02:14.556 tests : false 00:02:14.556 00:02:14.556 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.814 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:14.814 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.814 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:14.814 [3/267] Linking static target lib/librte_kvargs.a 00:02:15.072 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.072 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.072 [6/267] Linking static target lib/librte_log.a 00:02:15.331 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.331 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.331 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.331 [10/267] Compiling C 
object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.331 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.331 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.331 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.331 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.331 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.331 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.331 [17/267] Linking static target lib/librte_telemetry.a 00:02:15.610 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.610 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.610 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.610 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.610 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.920 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.920 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.920 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.920 [26/267] Linking target lib/librte_log.so.24.1 00:02:15.920 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.920 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.920 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.920 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:15.920 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.920 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:16.178 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.178 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.178 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.178 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:16.178 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.178 [38/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.178 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.178 [40/267] Linking target lib/librte_telemetry.so.24.1 00:02:16.178 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.178 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.178 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.437 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:16.437 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.437 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.437 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.437 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
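[Annotation] For context: the eal_common_* and rte_malloc objects being compiled above make up DPDK's Environment Abstraction Layer and its hugepage-backed allocator. A minimal consumer sketch, assuming only the stock DPDK 24.03 public API (illustrative; this program is not part of the build being logged):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_malloc.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses the EAL options (--lcores, --huge-dir, ...)
         * handled by eal_common_options.c and brings up the runtime. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        /* Allocations are served from the hugepage heaps implemented by
         * malloc_heap.c / rte_malloc.c compiled above. */
        void *buf = rte_malloc("demo", 4096, 64);
        if (buf != NULL)
            rte_free(buf);

        rte_eal_cleanup();
        return 0;
    }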
00:02:16.437 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.695 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.695 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.695 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.695 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.695 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.695 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.695 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.954 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.954 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.954 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.954 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.954 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.954 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:17.213 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.213 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.213 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:17.213 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:17.213 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:17.213 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.471 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:17.471 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:17.471 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:17.471 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.471 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.471 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:17.471 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.471 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:17.471 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:17.729 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.729 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:17.729 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.729 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.729 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.988 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.988 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.988 [85/267] Linking static target lib/librte_ring.a 00:02:17.988 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.988 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.988 [88/267] Linking static target lib/librte_eal.a 00:02:18.246 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.246 [90/267] 
Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.246 [91/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:18.246 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.246 [93/267] Linking static target lib/librte_rcu.a 00:02:18.246 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.506 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.506 [96/267] Linking static target lib/librte_mempool.a 00:02:18.506 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.506 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.506 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.506 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.506 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.763 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.763 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.763 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.763 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:18.763 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.763 [107/267] Linking static target lib/librte_net.a 00:02:18.763 [108/267] Linking static target lib/librte_meter.a 00:02:19.021 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.021 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.021 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.021 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.279 [113/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:19.280 [114/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.280 [115/267] Linking static target lib/librte_mbuf.a 00:02:19.280 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.280 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.280 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.538 [119/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.538 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.538 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.797 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.797 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.797 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.797 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.797 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.797 [127/267] Linking static target lib/librte_pci.a 00:02:19.797 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:20.055 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.055 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:20.055 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:20.055 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:20.055 [133/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.055 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.055 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.055 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.055 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.055 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.055 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.314 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.314 [141/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.314 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.314 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:20.314 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.314 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.314 [146/267] Linking static target lib/librte_cmdline.a 00:02:20.572 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:20.572 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.572 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.572 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.831 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.831 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.831 [153/267] Linking static target lib/librte_timer.a 00:02:20.831 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.831 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.831 [156/267] Linking static target lib/librte_ethdev.a 00:02:20.831 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.831 [158/267] Linking static target lib/librte_compressdev.a 00:02:20.831 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.091 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.091 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.091 [162/267] Linking static target lib/librte_hash.a 00:02:21.091 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.349 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.349 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.349 [166/267] Linking static target lib/librte_dmadev.a 00:02:21.349 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.349 [168/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.349 [169/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.349 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.608 [171/267] Compiling 
C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.608 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.608 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.608 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.867 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.867 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.867 [177/267] Linking static target lib/librte_cryptodev.a 00:02:21.867 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.867 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.867 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.867 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.867 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.867 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.126 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.126 [185/267] Linking static target lib/librte_power.a 00:02:22.385 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.385 [187/267] Linking static target lib/librte_reorder.a 00:02:22.385 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.385 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.385 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.385 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.385 [192/267] Linking static target lib/librte_security.a 00:02:22.643 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.643 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.901 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.901 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.160 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.160 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.160 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.160 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.419 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.419 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.419 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.419 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.419 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.676 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.676 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.676 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.676 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.676 [210/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.676 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.676 [212/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.676 [213/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.676 [214/267] Linking static target drivers/librte_bus_pci.a 00:02:23.935 [215/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.935 [216/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.935 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.935 [218/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.935 [219/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.935 [220/267] Linking static target drivers/librte_bus_vdev.a 00:02:23.935 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.935 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.935 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.935 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:24.193 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.193 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.452 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.826 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.826 [229/267] Linking target lib/librte_eal.so.24.1 00:02:25.826 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.826 [231/267] Linking target lib/librte_meter.so.24.1 00:02:25.826 [232/267] Linking target lib/librte_pci.so.24.1 00:02:25.826 [233/267] Linking target lib/librte_ring.so.24.1 00:02:25.826 [234/267] Linking target lib/librte_timer.so.24.1 00:02:25.827 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:25.827 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.827 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.827 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.827 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.827 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.827 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.827 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.827 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:25.827 [244/267] Linking target lib/librte_mempool.so.24.1 00:02:26.084 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.084 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.084 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.084 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:26.084 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
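[Annotation] The librte_mempool/librte_mbuf pair and the "ring" mempool driver linked above provide DPDK's packet-buffer pools. A short sketch of the usual allocation path, assuming the standard rte_pktmbuf_* helpers (sizes are arbitrary examples, not values from this build):

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Create a pool backed by the ring mempool driver linked above,
     * then run one alloc/free cycle. */
    static int mbuf_pool_demo(void)
    {
        struct rte_mempool *mp = rte_pktmbuf_pool_create(
            "mbuf_pool",
            8191,                       /* number of mbufs */
            256,                        /* per-lcore cache size */
            0,                          /* app-private area per mbuf */
            RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room per mbuf */
            rte_socket_id());
        if (mp == NULL)
            return -1;

        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        if (m != NULL)
            rte_pktmbuf_free(m);
        return 0;
    }

The per-lcore cache is the reason the pool is sized to 2^n - 1 here: it keeps hot mbufs off the shared ring on the fast path.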
00:02:26.342 [250/267] Linking target lib/librte_net.so.24.1 00:02:26.342 [251/267] Linking target lib/librte_cryptodev.so.24.1 00:02:26.342 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:26.342 [253/267] Linking target lib/librte_compressdev.so.24.1 00:02:26.342 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.342 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.342 [256/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.342 [257/267] Linking target lib/librte_security.so.24.1 00:02:26.342 [258/267] Linking target lib/librte_cmdline.so.24.1 00:02:26.342 [259/267] Linking target lib/librte_hash.so.24.1 00:02:26.342 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:26.599 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.599 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.599 [263/267] Linking target lib/librte_power.so.24.1 00:02:27.560 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:27.560 [265/267] Linking static target lib/librte_vhost.a 00:02:28.935 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.935 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:28.935 INFO: autodetecting backend as ninja 00:02:28.935 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:43.808 CC lib/log/log.o 00:02:43.808 CC lib/ut_mock/mock.o 00:02:43.808 CC lib/log/log_deprecated.o 00:02:43.808 CC lib/log/log_flags.o 00:02:43.808 CC lib/ut/ut.o 00:02:43.808 LIB libspdk_ut_mock.a 00:02:43.808 LIB libspdk_ut.a 00:02:43.808 LIB libspdk_log.a 00:02:43.808 SO libspdk_ut_mock.so.6.0 00:02:43.808 SO libspdk_ut.so.2.0 00:02:43.808 SO libspdk_log.so.7.1 00:02:43.808 SYMLINK libspdk_ut.so 00:02:43.808 SYMLINK libspdk_ut_mock.so 00:02:43.808 SYMLINK libspdk_log.so 00:02:43.808 CC lib/util/base64.o 00:02:43.808 CC lib/util/bit_array.o 00:02:43.808 CC lib/util/cpuset.o 00:02:43.808 CC lib/util/crc16.o 00:02:43.808 CXX lib/trace_parser/trace.o 00:02:43.808 CC lib/util/crc32.o 00:02:43.808 CC lib/ioat/ioat.o 00:02:43.808 CC lib/util/crc32c.o 00:02:43.808 CC lib/dma/dma.o 00:02:43.808 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.808 CC lib/util/crc32_ieee.o 00:02:43.808 CC lib/util/crc64.o 00:02:43.808 CC lib/util/dif.o 00:02:43.808 CC lib/vfio_user/host/vfio_user.o 00:02:43.808 LIB libspdk_dma.a 00:02:43.808 CC lib/util/fd.o 00:02:43.808 CC lib/util/fd_group.o 00:02:43.808 SO libspdk_dma.so.5.0 00:02:43.808 CC lib/util/file.o 00:02:43.808 CC lib/util/hexlify.o 00:02:43.808 LIB libspdk_ioat.a 00:02:43.808 SYMLINK libspdk_dma.so 00:02:43.808 CC lib/util/iov.o 00:02:43.808 SO libspdk_ioat.so.7.0 00:02:43.808 CC lib/util/math.o 00:02:43.808 SYMLINK libspdk_ioat.so 00:02:43.808 CC lib/util/net.o 00:02:43.808 CC lib/util/pipe.o 00:02:43.808 LIB libspdk_vfio_user.a 00:02:43.808 CC lib/util/strerror_tls.o 00:02:43.808 CC lib/util/string.o 00:02:43.808 SO libspdk_vfio_user.so.5.0 00:02:43.808 CC lib/util/uuid.o 00:02:43.808 SYMLINK libspdk_vfio_user.so 00:02:43.808 CC lib/util/xor.o 00:02:43.808 CC lib/util/zipf.o 00:02:43.808 CC lib/util/md5.o 00:02:43.808 LIB libspdk_util.a 00:02:43.808 SO libspdk_util.so.10.1 00:02:43.808 LIB libspdk_trace_parser.a 00:02:43.808 SO libspdk_trace_parser.so.6.0 00:02:43.808 
SYMLINK libspdk_util.so 00:02:43.808 SYMLINK libspdk_trace_parser.so 00:02:43.808 CC lib/idxd/idxd.o 00:02:43.808 CC lib/idxd/idxd_user.o 00:02:43.808 CC lib/idxd/idxd_kernel.o 00:02:43.808 CC lib/conf/conf.o 00:02:43.808 CC lib/rdma_utils/rdma_utils.o 00:02:43.808 CC lib/vmd/vmd.o 00:02:43.808 CC lib/json/json_parse.o 00:02:43.808 CC lib/json/json_util.o 00:02:43.808 CC lib/vmd/led.o 00:02:43.808 CC lib/env_dpdk/env.o 00:02:43.808 CC lib/env_dpdk/memory.o 00:02:43.808 CC lib/env_dpdk/pci.o 00:02:43.808 CC lib/env_dpdk/init.o 00:02:43.808 LIB libspdk_conf.a 00:02:43.808 CC lib/json/json_write.o 00:02:43.808 CC lib/env_dpdk/threads.o 00:02:43.808 SO libspdk_conf.so.6.0 00:02:43.808 LIB libspdk_rdma_utils.a 00:02:43.808 SO libspdk_rdma_utils.so.1.0 00:02:43.808 SYMLINK libspdk_conf.so 00:02:43.808 CC lib/env_dpdk/pci_ioat.o 00:02:43.808 SYMLINK libspdk_rdma_utils.so 00:02:43.808 CC lib/env_dpdk/pci_virtio.o 00:02:43.808 CC lib/env_dpdk/pci_vmd.o 00:02:43.808 CC lib/env_dpdk/pci_idxd.o 00:02:43.808 CC lib/env_dpdk/pci_event.o 00:02:43.808 CC lib/env_dpdk/sigbus_handler.o 00:02:43.808 LIB libspdk_json.a 00:02:43.808 SO libspdk_json.so.6.0 00:02:43.808 CC lib/env_dpdk/pci_dpdk.o 00:02:43.808 CC lib/rdma_provider/common.o 00:02:43.808 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.808 SYMLINK libspdk_json.so 00:02:43.808 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.808 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:43.808 LIB libspdk_idxd.a 00:02:43.808 SO libspdk_idxd.so.12.1 00:02:43.808 LIB libspdk_vmd.a 00:02:43.808 SO libspdk_vmd.so.6.0 00:02:43.808 SYMLINK libspdk_idxd.so 00:02:43.808 CC lib/jsonrpc/jsonrpc_server.o 00:02:43.808 CC lib/jsonrpc/jsonrpc_client.o 00:02:43.808 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:43.808 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:43.808 LIB libspdk_rdma_provider.a 00:02:43.808 SYMLINK libspdk_vmd.so 00:02:43.808 SO libspdk_rdma_provider.so.7.0 00:02:43.808 SYMLINK libspdk_rdma_provider.so 00:02:44.070 LIB libspdk_jsonrpc.a 00:02:44.070 SO libspdk_jsonrpc.so.6.0 00:02:44.070 SYMLINK libspdk_jsonrpc.so 00:02:44.329 CC lib/rpc/rpc.o 00:02:44.329 LIB libspdk_env_dpdk.a 00:02:44.590 SO libspdk_env_dpdk.so.15.1 00:02:44.590 LIB libspdk_rpc.a 00:02:44.590 SO libspdk_rpc.so.6.0 00:02:44.590 SYMLINK libspdk_rpc.so 00:02:44.590 SYMLINK libspdk_env_dpdk.so 00:02:44.851 CC lib/trace/trace.o 00:02:44.851 CC lib/trace/trace_rpc.o 00:02:44.851 CC lib/trace/trace_flags.o 00:02:44.852 CC lib/notify/notify_rpc.o 00:02:44.852 CC lib/notify/notify.o 00:02:44.852 CC lib/keyring/keyring.o 00:02:44.852 CC lib/keyring/keyring_rpc.o 00:02:44.852 LIB libspdk_notify.a 00:02:45.114 SO libspdk_notify.so.6.0 00:02:45.114 SYMLINK libspdk_notify.so 00:02:45.114 LIB libspdk_keyring.a 00:02:45.114 LIB libspdk_trace.a 00:02:45.114 SO libspdk_keyring.so.2.0 00:02:45.114 SO libspdk_trace.so.11.0 00:02:45.114 SYMLINK libspdk_keyring.so 00:02:45.114 SYMLINK libspdk_trace.so 00:02:45.375 CC lib/sock/sock.o 00:02:45.375 CC lib/thread/thread.o 00:02:45.375 CC lib/sock/sock_rpc.o 00:02:45.375 CC lib/thread/iobuf.o 00:02:45.947 LIB libspdk_sock.a 00:02:45.947 SO libspdk_sock.so.10.0 00:02:45.947 SYMLINK libspdk_sock.so 00:02:46.208 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.208 CC lib/nvme/nvme_ctrlr.o 00:02:46.208 CC lib/nvme/nvme_fabric.o 00:02:46.208 CC lib/nvme/nvme_ns_cmd.o 00:02:46.208 CC lib/nvme/nvme_pcie_common.o 00:02:46.208 CC lib/nvme/nvme_ns.o 00:02:46.208 CC lib/nvme/nvme_qpair.o 00:02:46.208 CC lib/nvme/nvme_pcie.o 00:02:46.208 CC lib/nvme/nvme.o 00:02:46.774 CC lib/nvme/nvme_quirks.o 00:02:46.774 
CC lib/nvme/nvme_transport.o 00:02:46.774 CC lib/nvme/nvme_discovery.o 00:02:46.774 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.033 LIB libspdk_thread.a 00:02:47.033 SO libspdk_thread.so.11.0 00:02:47.033 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.033 CC lib/nvme/nvme_tcp.o 00:02:47.033 CC lib/nvme/nvme_opal.o 00:02:47.033 SYMLINK libspdk_thread.so 00:02:47.033 CC lib/nvme/nvme_io_msg.o 00:02:47.292 CC lib/nvme/nvme_poll_group.o 00:02:47.292 CC lib/nvme/nvme_zns.o 00:02:47.292 CC lib/nvme/nvme_stubs.o 00:02:47.292 CC lib/nvme/nvme_auth.o 00:02:47.551 CC lib/nvme/nvme_cuse.o 00:02:47.551 CC lib/nvme/nvme_rdma.o 00:02:47.551 CC lib/accel/accel.o 00:02:47.551 CC lib/blob/blobstore.o 00:02:47.809 CC lib/accel/accel_rpc.o 00:02:47.809 CC lib/accel/accel_sw.o 00:02:47.809 CC lib/blob/request.o 00:02:47.809 CC lib/blob/zeroes.o 00:02:48.068 CC lib/blob/blob_bs_dev.o 00:02:48.325 CC lib/init/json_config.o 00:02:48.325 CC lib/init/subsystem.o 00:02:48.325 CC lib/init/subsystem_rpc.o 00:02:48.325 CC lib/init/rpc.o 00:02:48.325 LIB libspdk_init.a 00:02:48.583 CC lib/virtio/virtio.o 00:02:48.583 CC lib/virtio/virtio_vfio_user.o 00:02:48.583 CC lib/virtio/virtio_vhost_user.o 00:02:48.583 CC lib/virtio/virtio_pci.o 00:02:48.583 CC lib/fsdev/fsdev.o 00:02:48.583 CC lib/fsdev/fsdev_io.o 00:02:48.583 SO libspdk_init.so.6.0 00:02:48.583 SYMLINK libspdk_init.so 00:02:48.583 CC lib/event/app.o 00:02:48.583 CC lib/event/reactor.o 00:02:48.842 LIB libspdk_accel.a 00:02:48.842 CC lib/event/log_rpc.o 00:02:48.842 SO libspdk_accel.so.16.0 00:02:48.842 CC lib/event/app_rpc.o 00:02:48.842 LIB libspdk_virtio.a 00:02:48.842 SO libspdk_virtio.so.7.0 00:02:48.842 CC lib/fsdev/fsdev_rpc.o 00:02:48.842 SYMLINK libspdk_accel.so 00:02:48.842 SYMLINK libspdk_virtio.so 00:02:48.842 CC lib/event/scheduler_static.o 00:02:48.842 LIB libspdk_nvme.a 00:02:49.099 CC lib/bdev/bdev.o 00:02:49.099 CC lib/bdev/bdev_rpc.o 00:02:49.099 CC lib/bdev/bdev_zone.o 00:02:49.099 CC lib/bdev/part.o 00:02:49.099 CC lib/bdev/scsi_nvme.o 00:02:49.099 SO libspdk_nvme.so.15.0 00:02:49.099 LIB libspdk_fsdev.a 00:02:49.099 SO libspdk_fsdev.so.2.0 00:02:49.099 LIB libspdk_event.a 00:02:49.099 SO libspdk_event.so.14.0 00:02:49.099 SYMLINK libspdk_fsdev.so 00:02:49.357 SYMLINK libspdk_event.so 00:02:49.357 SYMLINK libspdk_nvme.so 00:02:49.357 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:49.927 LIB libspdk_fuse_dispatcher.a 00:02:49.927 SO libspdk_fuse_dispatcher.so.1.0 00:02:50.185 SYMLINK libspdk_fuse_dispatcher.so 00:02:51.122 LIB libspdk_blob.a 00:02:51.122 SO libspdk_blob.so.11.0 00:02:51.122 SYMLINK libspdk_blob.so 00:02:51.384 CC lib/blobfs/tree.o 00:02:51.384 CC lib/blobfs/blobfs.o 00:02:51.384 CC lib/lvol/lvol.o 00:02:51.646 LIB libspdk_bdev.a 00:02:51.906 SO libspdk_bdev.so.17.0 00:02:51.906 SYMLINK libspdk_bdev.so 00:02:52.165 CC lib/nvmf/ctrlr.o 00:02:52.165 CC lib/nvmf/ctrlr_discovery.o 00:02:52.165 CC lib/nvmf/ctrlr_bdev.o 00:02:52.165 CC lib/nvmf/subsystem.o 00:02:52.165 CC lib/ublk/ublk.o 00:02:52.165 CC lib/nbd/nbd.o 00:02:52.165 CC lib/ftl/ftl_core.o 00:02:52.165 CC lib/scsi/dev.o 00:02:52.165 LIB libspdk_blobfs.a 00:02:52.165 SO libspdk_blobfs.so.10.0 00:02:52.424 SYMLINK libspdk_blobfs.so 00:02:52.424 CC lib/scsi/lun.o 00:02:52.424 LIB libspdk_lvol.a 00:02:52.424 CC lib/ftl/ftl_init.o 00:02:52.424 SO libspdk_lvol.so.10.0 00:02:52.424 SYMLINK libspdk_lvol.so 00:02:52.424 CC lib/nbd/nbd_rpc.o 00:02:52.424 CC lib/ftl/ftl_layout.o 00:02:52.424 CC lib/nvmf/nvmf.o 00:02:52.424 CC lib/nvmf/nvmf_rpc.o 00:02:52.424 CC lib/nvmf/transport.o 
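[Annotation] The lib/nvme objects above (nvme_ctrlr.o, nvme_ns_cmd.o, nvme_pcie.o, ...) build into libspdk_nvme, whose entry point for device enumeration is spdk_nvme_probe(). A minimal attach sketch, assuming the public spdk/env.h and spdk/nvme.h APIs (illustrative; a real application would also allocate queue pairs and poll for completions):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("attached: %s\n", trid->traddr);
    }

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "probe_demo";
        if (spdk_env_init(&opts) < 0)
            return 1;
        /* With trid == NULL, the local PCIe bus is scanned by default. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }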
00:02:52.424 LIB libspdk_nbd.a 00:02:52.424 SO libspdk_nbd.so.7.0 00:02:52.682 SYMLINK libspdk_nbd.so 00:02:52.682 CC lib/nvmf/tcp.o 00:02:52.682 CC lib/scsi/port.o 00:02:52.682 CC lib/ftl/ftl_debug.o 00:02:52.682 CC lib/scsi/scsi.o 00:02:52.682 CC lib/ublk/ublk_rpc.o 00:02:52.682 CC lib/nvmf/stubs.o 00:02:52.940 CC lib/scsi/scsi_bdev.o 00:02:52.940 LIB libspdk_ublk.a 00:02:52.940 CC lib/ftl/ftl_io.o 00:02:52.940 SO libspdk_ublk.so.3.0 00:02:52.940 SYMLINK libspdk_ublk.so 00:02:52.940 CC lib/scsi/scsi_pr.o 00:02:53.198 CC lib/ftl/ftl_sb.o 00:02:53.198 CC lib/nvmf/mdns_server.o 00:02:53.198 CC lib/ftl/ftl_l2p.o 00:02:53.198 CC lib/nvmf/rdma.o 00:02:53.198 CC lib/ftl/ftl_l2p_flat.o 00:02:53.198 CC lib/ftl/ftl_nv_cache.o 00:02:53.198 CC lib/ftl/ftl_band.o 00:02:53.457 CC lib/scsi/scsi_rpc.o 00:02:53.457 CC lib/ftl/ftl_band_ops.o 00:02:53.457 CC lib/nvmf/auth.o 00:02:53.457 CC lib/ftl/ftl_writer.o 00:02:53.457 CC lib/scsi/task.o 00:02:53.457 CC lib/ftl/ftl_rq.o 00:02:53.715 CC lib/ftl/ftl_reloc.o 00:02:53.715 LIB libspdk_scsi.a 00:02:53.715 CC lib/ftl/ftl_l2p_cache.o 00:02:53.715 CC lib/ftl/ftl_p2l.o 00:02:53.715 SO libspdk_scsi.so.9.0 00:02:53.715 CC lib/ftl/ftl_p2l_log.o 00:02:53.715 SYMLINK libspdk_scsi.so 00:02:53.715 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.973 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.973 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.973 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.973 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.973 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.973 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:54.232 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:54.232 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:54.232 CC lib/iscsi/conn.o 00:02:54.232 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:54.232 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:54.232 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:54.232 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:54.232 CC lib/iscsi/init_grp.o 00:02:54.232 CC lib/iscsi/iscsi.o 00:02:54.232 CC lib/vhost/vhost.o 00:02:54.489 CC lib/ftl/utils/ftl_conf.o 00:02:54.489 CC lib/ftl/utils/ftl_md.o 00:02:54.489 CC lib/iscsi/param.o 00:02:54.489 CC lib/iscsi/portal_grp.o 00:02:54.489 CC lib/iscsi/tgt_node.o 00:02:54.489 CC lib/ftl/utils/ftl_mempool.o 00:02:54.489 CC lib/ftl/utils/ftl_bitmap.o 00:02:54.747 CC lib/vhost/vhost_rpc.o 00:02:54.748 CC lib/vhost/vhost_scsi.o 00:02:54.748 CC lib/vhost/vhost_blk.o 00:02:54.748 CC lib/ftl/utils/ftl_property.o 00:02:54.748 CC lib/iscsi/iscsi_subsystem.o 00:02:54.748 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:55.006 CC lib/vhost/rte_vhost_user.o 00:02:55.006 CC lib/iscsi/iscsi_rpc.o 00:02:55.006 CC lib/iscsi/task.o 00:02:55.006 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:55.264 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:55.264 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:55.264 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:55.264 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:55.264 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:55.264 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:55.264 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:55.264 LIB libspdk_nvmf.a 00:02:55.264 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:55.522 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:55.522 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:55.522 SO libspdk_nvmf.so.20.0 00:02:55.522 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:55.522 CC lib/ftl/base/ftl_base_dev.o 00:02:55.522 CC lib/ftl/base/ftl_base_bdev.o 00:02:55.522 CC lib/ftl/ftl_trace.o 00:02:55.522 SYMLINK libspdk_nvmf.so 00:02:55.781 LIB libspdk_vhost.a 00:02:55.781 LIB libspdk_iscsi.a 00:02:55.781 SO libspdk_vhost.so.8.0 00:02:55.781 LIB 
libspdk_ftl.a 00:02:55.781 SO libspdk_iscsi.so.8.0 00:02:55.781 SYMLINK libspdk_vhost.so 00:02:56.107 SYMLINK libspdk_iscsi.so 00:02:56.107 SO libspdk_ftl.so.9.0 00:02:56.107 SYMLINK libspdk_ftl.so 00:02:56.366 CC module/env_dpdk/env_dpdk_rpc.o 00:02:56.625 CC module/scheduler/gscheduler/gscheduler.o 00:02:56.625 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:56.625 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:56.625 CC module/sock/posix/posix.o 00:02:56.625 CC module/accel/error/accel_error.o 00:02:56.625 CC module/blob/bdev/blob_bdev.o 00:02:56.625 CC module/fsdev/aio/fsdev_aio.o 00:02:56.625 CC module/accel/ioat/accel_ioat.o 00:02:56.625 CC module/keyring/file/keyring.o 00:02:56.625 LIB libspdk_env_dpdk_rpc.a 00:02:56.625 SO libspdk_env_dpdk_rpc.so.6.0 00:02:56.625 CC module/keyring/file/keyring_rpc.o 00:02:56.625 LIB libspdk_scheduler_gscheduler.a 00:02:56.625 SYMLINK libspdk_env_dpdk_rpc.so 00:02:56.625 CC module/accel/ioat/accel_ioat_rpc.o 00:02:56.625 LIB libspdk_scheduler_dpdk_governor.a 00:02:56.625 SO libspdk_scheduler_gscheduler.so.4.0 00:02:56.625 CC module/accel/error/accel_error_rpc.o 00:02:56.625 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:56.625 LIB libspdk_scheduler_dynamic.a 00:02:56.625 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:56.625 SO libspdk_scheduler_dynamic.so.4.0 00:02:56.625 SYMLINK libspdk_scheduler_gscheduler.so 00:02:56.625 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:56.625 CC module/fsdev/aio/linux_aio_mgr.o 00:02:56.625 LIB libspdk_keyring_file.a 00:02:56.625 SO libspdk_keyring_file.so.2.0 00:02:56.625 SYMLINK libspdk_scheduler_dynamic.so 00:02:56.883 LIB libspdk_accel_ioat.a 00:02:56.883 LIB libspdk_blob_bdev.a 00:02:56.883 SYMLINK libspdk_keyring_file.so 00:02:56.883 SO libspdk_accel_ioat.so.6.0 00:02:56.883 LIB libspdk_accel_error.a 00:02:56.883 SO libspdk_blob_bdev.so.11.0 00:02:56.883 SO libspdk_accel_error.so.2.0 00:02:56.883 SYMLINK libspdk_accel_ioat.so 00:02:56.883 SYMLINK libspdk_blob_bdev.so 00:02:56.883 SYMLINK libspdk_accel_error.so 00:02:56.883 CC module/accel/dsa/accel_dsa.o 00:02:56.883 CC module/accel/dsa/accel_dsa_rpc.o 00:02:56.883 CC module/accel/iaa/accel_iaa.o 00:02:56.883 CC module/accel/iaa/accel_iaa_rpc.o 00:02:56.883 CC module/keyring/linux/keyring.o 00:02:56.883 CC module/keyring/linux/keyring_rpc.o 00:02:57.142 LIB libspdk_keyring_linux.a 00:02:57.142 CC module/blobfs/bdev/blobfs_bdev.o 00:02:57.142 CC module/bdev/delay/vbdev_delay.o 00:02:57.142 SO libspdk_keyring_linux.so.1.0 00:02:57.142 LIB libspdk_accel_iaa.a 00:02:57.142 LIB libspdk_sock_posix.a 00:02:57.142 SO libspdk_accel_iaa.so.3.0 00:02:57.142 CC module/bdev/error/vbdev_error.o 00:02:57.142 LIB libspdk_accel_dsa.a 00:02:57.142 SO libspdk_sock_posix.so.6.0 00:02:57.142 SYMLINK libspdk_keyring_linux.so 00:02:57.142 CC module/bdev/error/vbdev_error_rpc.o 00:02:57.142 SO libspdk_accel_dsa.so.5.0 00:02:57.142 CC module/bdev/gpt/gpt.o 00:02:57.142 SYMLINK libspdk_accel_iaa.so 00:02:57.142 CC module/bdev/lvol/vbdev_lvol.o 00:02:57.142 LIB libspdk_fsdev_aio.a 00:02:57.142 SYMLINK libspdk_sock_posix.so 00:02:57.142 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:57.142 SYMLINK libspdk_accel_dsa.so 00:02:57.142 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:57.142 SO libspdk_fsdev_aio.so.1.0 00:02:57.142 CC module/bdev/gpt/vbdev_gpt.o 00:02:57.401 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:57.401 SYMLINK libspdk_fsdev_aio.so 00:02:57.401 CC module/bdev/malloc/bdev_malloc.o 00:02:57.401 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:57.401 LIB 
libspdk_blobfs_bdev.a 00:02:57.401 LIB libspdk_bdev_error.a 00:02:57.401 SO libspdk_blobfs_bdev.so.6.0 00:02:57.401 LIB libspdk_bdev_delay.a 00:02:57.401 SO libspdk_bdev_error.so.6.0 00:02:57.401 SO libspdk_bdev_delay.so.6.0 00:02:57.401 CC module/bdev/null/bdev_null.o 00:02:57.401 SYMLINK libspdk_blobfs_bdev.so 00:02:57.401 CC module/bdev/null/bdev_null_rpc.o 00:02:57.401 SYMLINK libspdk_bdev_error.so 00:02:57.401 SYMLINK libspdk_bdev_delay.so 00:02:57.401 CC module/bdev/nvme/bdev_nvme.o 00:02:57.401 LIB libspdk_bdev_gpt.a 00:02:57.659 SO libspdk_bdev_gpt.so.6.0 00:02:57.659 SYMLINK libspdk_bdev_gpt.so 00:02:57.659 CC module/bdev/raid/bdev_raid.o 00:02:57.659 CC module/bdev/passthru/vbdev_passthru.o 00:02:57.659 CC module/bdev/raid/bdev_raid_rpc.o 00:02:57.659 LIB libspdk_bdev_malloc.a 00:02:57.659 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:57.659 SO libspdk_bdev_malloc.so.6.0 00:02:57.659 CC module/bdev/split/vbdev_split.o 00:02:57.659 LIB libspdk_bdev_null.a 00:02:57.659 LIB libspdk_bdev_lvol.a 00:02:57.659 SYMLINK libspdk_bdev_malloc.so 00:02:57.659 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.659 CC module/bdev/xnvme/bdev_xnvme.o 00:02:57.659 SO libspdk_bdev_null.so.6.0 00:02:57.659 SO libspdk_bdev_lvol.so.6.0 00:02:57.917 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:57.917 SYMLINK libspdk_bdev_null.so 00:02:57.917 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:57.917 SYMLINK libspdk_bdev_lvol.so 00:02:57.917 CC module/bdev/split/vbdev_split_rpc.o 00:02:57.917 CC module/bdev/nvme/nvme_rpc.o 00:02:57.917 CC module/bdev/nvme/bdev_mdns_client.o 00:02:57.917 LIB libspdk_bdev_zone_block.a 00:02:57.917 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:57.917 CC module/bdev/nvme/vbdev_opal.o 00:02:57.917 SO libspdk_bdev_zone_block.so.6.0 00:02:57.917 LIB libspdk_bdev_split.a 00:02:57.917 SO libspdk_bdev_split.so.6.0 00:02:57.917 SYMLINK libspdk_bdev_zone_block.so 00:02:57.917 LIB libspdk_bdev_xnvme.a 00:02:57.917 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:57.917 SYMLINK libspdk_bdev_split.so 00:02:57.917 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.175 LIB libspdk_bdev_passthru.a 00:02:58.175 SO libspdk_bdev_xnvme.so.3.0 00:02:58.175 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.175 SO libspdk_bdev_passthru.so.6.0 00:02:58.175 SYMLINK libspdk_bdev_xnvme.so 00:02:58.175 CC module/bdev/raid/raid0.o 00:02:58.175 CC module/bdev/raid/raid1.o 00:02:58.175 SYMLINK libspdk_bdev_passthru.so 00:02:58.175 CC module/bdev/raid/concat.o 00:02:58.175 CC module/bdev/aio/bdev_aio.o 00:02:58.433 CC module/bdev/ftl/bdev_ftl.o 00:02:58.433 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.433 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.433 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.433 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.433 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.433 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.433 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.433 LIB libspdk_bdev_raid.a 00:02:58.433 SO libspdk_bdev_raid.so.6.0 00:02:58.692 SYMLINK libspdk_bdev_raid.so 00:02:58.692 LIB libspdk_bdev_ftl.a 00:02:58.692 SO libspdk_bdev_ftl.so.6.0 00:02:58.692 LIB libspdk_bdev_aio.a 00:02:58.692 SO libspdk_bdev_aio.so.6.0 00:02:58.692 LIB libspdk_bdev_iscsi.a 00:02:58.692 SYMLINK libspdk_bdev_ftl.so 00:02:58.692 SO libspdk_bdev_iscsi.so.6.0 00:02:58.692 SYMLINK libspdk_bdev_aio.so 00:02:58.692 SYMLINK libspdk_bdev_iscsi.so 00:02:58.952 LIB libspdk_bdev_virtio.a 00:02:58.952 SO libspdk_bdev_virtio.so.6.0 00:02:58.952 SYMLINK libspdk_bdev_virtio.so 00:03:00.337 LIB 
libspdk_bdev_nvme.a 00:03:00.337 SO libspdk_bdev_nvme.so.7.1 00:03:00.337 SYMLINK libspdk_bdev_nvme.so 00:03:00.597 CC module/event/subsystems/sock/sock.o 00:03:00.597 CC module/event/subsystems/scheduler/scheduler.o 00:03:00.597 CC module/event/subsystems/iobuf/iobuf.o 00:03:00.597 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:00.597 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:00.597 CC module/event/subsystems/keyring/keyring.o 00:03:00.597 CC module/event/subsystems/fsdev/fsdev.o 00:03:00.597 CC module/event/subsystems/vmd/vmd.o 00:03:00.597 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:00.857 LIB libspdk_event_keyring.a 00:03:00.857 LIB libspdk_event_scheduler.a 00:03:00.857 LIB libspdk_event_sock.a 00:03:00.857 LIB libspdk_event_vhost_blk.a 00:03:00.857 LIB libspdk_event_fsdev.a 00:03:00.857 SO libspdk_event_keyring.so.1.0 00:03:00.857 LIB libspdk_event_iobuf.a 00:03:00.857 SO libspdk_event_sock.so.5.0 00:03:00.857 LIB libspdk_event_vmd.a 00:03:00.857 SO libspdk_event_scheduler.so.4.0 00:03:00.857 SO libspdk_event_vhost_blk.so.3.0 00:03:00.857 SO libspdk_event_fsdev.so.1.0 00:03:00.857 SO libspdk_event_vmd.so.6.0 00:03:00.857 SO libspdk_event_iobuf.so.3.0 00:03:00.857 SYMLINK libspdk_event_keyring.so 00:03:00.857 SYMLINK libspdk_event_sock.so 00:03:00.857 SYMLINK libspdk_event_vhost_blk.so 00:03:00.857 SYMLINK libspdk_event_scheduler.so 00:03:00.857 SYMLINK libspdk_event_fsdev.so 00:03:00.857 SYMLINK libspdk_event_vmd.so 00:03:00.857 SYMLINK libspdk_event_iobuf.so 00:03:01.118 CC module/event/subsystems/accel/accel.o 00:03:01.379 LIB libspdk_event_accel.a 00:03:01.379 SO libspdk_event_accel.so.6.0 00:03:01.379 SYMLINK libspdk_event_accel.so 00:03:01.641 CC module/event/subsystems/bdev/bdev.o 00:03:01.641 LIB libspdk_event_bdev.a 00:03:01.641 SO libspdk_event_bdev.so.6.0 00:03:01.901 SYMLINK libspdk_event_bdev.so 00:03:01.901 CC module/event/subsystems/scsi/scsi.o 00:03:01.902 CC module/event/subsystems/nbd/nbd.o 00:03:01.902 CC module/event/subsystems/ublk/ublk.o 00:03:01.902 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:01.902 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.162 LIB libspdk_event_nbd.a 00:03:02.162 LIB libspdk_event_scsi.a 00:03:02.162 LIB libspdk_event_ublk.a 00:03:02.162 SO libspdk_event_nbd.so.6.0 00:03:02.162 SO libspdk_event_scsi.so.6.0 00:03:02.162 SO libspdk_event_ublk.so.3.0 00:03:02.162 SYMLINK libspdk_event_nbd.so 00:03:02.162 SYMLINK libspdk_event_scsi.so 00:03:02.162 SYMLINK libspdk_event_ublk.so 00:03:02.162 LIB libspdk_event_nvmf.a 00:03:02.162 SO libspdk_event_nvmf.so.6.0 00:03:02.162 SYMLINK libspdk_event_nvmf.so 00:03:02.422 CC module/event/subsystems/iscsi/iscsi.o 00:03:02.422 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:02.422 LIB libspdk_event_vhost_scsi.a 00:03:02.422 LIB libspdk_event_iscsi.a 00:03:02.422 SO libspdk_event_vhost_scsi.so.3.0 00:03:02.422 SO libspdk_event_iscsi.so.6.0 00:03:02.683 SYMLINK libspdk_event_vhost_scsi.so 00:03:02.683 SYMLINK libspdk_event_iscsi.so 00:03:02.683 SO libspdk.so.6.0 00:03:02.683 SYMLINK libspdk.so 00:03:02.946 CXX app/trace/trace.o 00:03:02.946 CC app/trace_record/trace_record.o 00:03:02.946 CC app/spdk_lspci/spdk_lspci.o 00:03:02.946 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.946 CC app/nvmf_tgt/nvmf_main.o 00:03:02.946 CC app/iscsi_tgt/iscsi_tgt.o 00:03:02.946 CC examples/ioat/perf/perf.o 00:03:02.946 CC app/spdk_tgt/spdk_tgt.o 00:03:02.946 CC examples/util/zipf/zipf.o 00:03:02.946 CC test/thread/poller_perf/poller_perf.o 00:03:02.946 LINK spdk_lspci 00:03:03.208 
LINK interrupt_tgt 00:03:03.208 LINK nvmf_tgt 00:03:03.208 LINK poller_perf 00:03:03.208 LINK zipf 00:03:03.208 LINK spdk_trace_record 00:03:03.208 LINK iscsi_tgt 00:03:03.208 LINK spdk_tgt 00:03:03.208 LINK ioat_perf 00:03:03.208 LINK spdk_trace 00:03:03.208 CC app/spdk_nvme_perf/perf.o 00:03:03.469 CC app/spdk_nvme_discover/discovery_aer.o 00:03:03.469 CC app/spdk_nvme_identify/identify.o 00:03:03.469 CC app/spdk_top/spdk_top.o 00:03:03.469 CC examples/ioat/verify/verify.o 00:03:03.469 TEST_HEADER include/spdk/accel.h 00:03:03.469 TEST_HEADER include/spdk/accel_module.h 00:03:03.469 TEST_HEADER include/spdk/assert.h 00:03:03.469 TEST_HEADER include/spdk/barrier.h 00:03:03.469 TEST_HEADER include/spdk/base64.h 00:03:03.469 TEST_HEADER include/spdk/bdev.h 00:03:03.469 TEST_HEADER include/spdk/bdev_module.h 00:03:03.469 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.469 TEST_HEADER include/spdk/bit_array.h 00:03:03.469 TEST_HEADER include/spdk/bit_pool.h 00:03:03.469 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.469 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.469 TEST_HEADER include/spdk/blobfs.h 00:03:03.469 TEST_HEADER include/spdk/blob.h 00:03:03.469 TEST_HEADER include/spdk/conf.h 00:03:03.469 TEST_HEADER include/spdk/config.h 00:03:03.469 TEST_HEADER include/spdk/cpuset.h 00:03:03.469 TEST_HEADER include/spdk/crc16.h 00:03:03.469 TEST_HEADER include/spdk/crc32.h 00:03:03.469 TEST_HEADER include/spdk/crc64.h 00:03:03.469 CC test/dma/test_dma/test_dma.o 00:03:03.469 TEST_HEADER include/spdk/dif.h 00:03:03.469 TEST_HEADER include/spdk/dma.h 00:03:03.469 TEST_HEADER include/spdk/endian.h 00:03:03.469 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.469 TEST_HEADER include/spdk/env.h 00:03:03.469 TEST_HEADER include/spdk/event.h 00:03:03.469 TEST_HEADER include/spdk/fd_group.h 00:03:03.469 TEST_HEADER include/spdk/fd.h 00:03:03.469 TEST_HEADER include/spdk/file.h 00:03:03.469 TEST_HEADER include/spdk/fsdev.h 00:03:03.469 TEST_HEADER include/spdk/fsdev_module.h 00:03:03.469 TEST_HEADER include/spdk/ftl.h 00:03:03.469 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:03.469 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.469 TEST_HEADER include/spdk/hexlify.h 00:03:03.469 TEST_HEADER include/spdk/histogram_data.h 00:03:03.469 CC test/app/bdev_svc/bdev_svc.o 00:03:03.469 TEST_HEADER include/spdk/idxd.h 00:03:03.469 TEST_HEADER include/spdk/idxd_spec.h 00:03:03.469 TEST_HEADER include/spdk/init.h 00:03:03.469 TEST_HEADER include/spdk/ioat.h 00:03:03.469 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.469 TEST_HEADER include/spdk/iscsi_spec.h 00:03:03.469 TEST_HEADER include/spdk/json.h 00:03:03.469 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.469 TEST_HEADER include/spdk/keyring.h 00:03:03.469 TEST_HEADER include/spdk/keyring_module.h 00:03:03.469 TEST_HEADER include/spdk/likely.h 00:03:03.469 TEST_HEADER include/spdk/log.h 00:03:03.469 CC test/event/event_perf/event_perf.o 00:03:03.469 TEST_HEADER include/spdk/lvol.h 00:03:03.469 TEST_HEADER include/spdk/md5.h 00:03:03.469 TEST_HEADER include/spdk/memory.h 00:03:03.469 TEST_HEADER include/spdk/mmio.h 00:03:03.469 TEST_HEADER include/spdk/nbd.h 00:03:03.469 TEST_HEADER include/spdk/net.h 00:03:03.469 TEST_HEADER include/spdk/notify.h 00:03:03.469 TEST_HEADER include/spdk/nvme.h 00:03:03.469 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.469 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.469 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.469 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.469 LINK spdk_nvme_discover 00:03:03.469 TEST_HEADER 
include/spdk/nvme_zns.h 00:03:03.469 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.469 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.469 TEST_HEADER include/spdk/nvmf.h 00:03:03.469 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.469 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.469 TEST_HEADER include/spdk/opal.h 00:03:03.469 TEST_HEADER include/spdk/opal_spec.h 00:03:03.469 TEST_HEADER include/spdk/pci_ids.h 00:03:03.469 TEST_HEADER include/spdk/pipe.h 00:03:03.469 CC test/env/mem_callbacks/mem_callbacks.o 00:03:03.469 TEST_HEADER include/spdk/queue.h 00:03:03.469 TEST_HEADER include/spdk/reduce.h 00:03:03.469 TEST_HEADER include/spdk/rpc.h 00:03:03.469 TEST_HEADER include/spdk/scheduler.h 00:03:03.469 TEST_HEADER include/spdk/scsi.h 00:03:03.469 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.469 TEST_HEADER include/spdk/sock.h 00:03:03.469 TEST_HEADER include/spdk/stdinc.h 00:03:03.469 TEST_HEADER include/spdk/string.h 00:03:03.469 TEST_HEADER include/spdk/thread.h 00:03:03.469 TEST_HEADER include/spdk/trace.h 00:03:03.469 TEST_HEADER include/spdk/trace_parser.h 00:03:03.469 TEST_HEADER include/spdk/tree.h 00:03:03.469 TEST_HEADER include/spdk/ublk.h 00:03:03.469 TEST_HEADER include/spdk/util.h 00:03:03.469 TEST_HEADER include/spdk/uuid.h 00:03:03.469 TEST_HEADER include/spdk/version.h 00:03:03.469 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.469 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.469 TEST_HEADER include/spdk/vhost.h 00:03:03.469 TEST_HEADER include/spdk/vmd.h 00:03:03.469 TEST_HEADER include/spdk/xor.h 00:03:03.729 TEST_HEADER include/spdk/zipf.h 00:03:03.729 CXX test/cpp_headers/accel.o 00:03:03.729 LINK verify 00:03:03.729 LINK event_perf 00:03:03.729 CXX test/cpp_headers/accel_module.o 00:03:03.729 LINK bdev_svc 00:03:03.729 CXX test/cpp_headers/assert.o 00:03:03.729 CC test/event/reactor/reactor.o 00:03:03.990 CC examples/sock/hello_world/hello_sock.o 00:03:03.990 CC examples/thread/thread/thread_ex.o 00:03:03.990 LINK test_dma 00:03:03.990 LINK spdk_nvme_perf 00:03:03.990 LINK reactor 00:03:03.990 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:03.990 CXX test/cpp_headers/barrier.o 00:03:03.990 LINK mem_callbacks 00:03:04.250 LINK thread 00:03:04.250 LINK hello_sock 00:03:04.250 LINK spdk_nvme_identify 00:03:04.250 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.250 CXX test/cpp_headers/base64.o 00:03:04.250 CC test/event/reactor_perf/reactor_perf.o 00:03:04.250 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.250 CC test/env/vtophys/vtophys.o 00:03:04.250 CXX test/cpp_headers/bdev.o 00:03:04.250 LINK spdk_top 00:03:04.250 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.250 LINK reactor_perf 00:03:04.250 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.250 LINK nvme_fuzz 00:03:04.539 CC test/event/app_repeat/app_repeat.o 00:03:04.539 LINK vtophys 00:03:04.539 CXX test/cpp_headers/bdev_module.o 00:03:04.539 CC examples/vmd/lsvmd/lsvmd.o 00:03:04.539 LINK env_dpdk_post_init 00:03:04.539 CC examples/vmd/led/led.o 00:03:04.539 CC app/spdk_dd/spdk_dd.o 00:03:04.539 LINK app_repeat 00:03:04.539 LINK lsvmd 00:03:04.539 CC test/app/histogram_perf/histogram_perf.o 00:03:04.539 CXX test/cpp_headers/bdev_zone.o 00:03:04.539 CC test/event/scheduler/scheduler.o 00:03:04.845 LINK led 00:03:04.845 CC test/env/pci/pci_ut.o 00:03:04.845 CC test/env/memory/memory_ut.o 00:03:04.845 LINK histogram_perf 00:03:04.845 CXX test/cpp_headers/bit_array.o 00:03:04.845 CC test/app/jsoncat/jsoncat.o 00:03:04.845 LINK vhost_fuzz 00:03:04.845 LINK scheduler 00:03:04.845 
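The CXX test/cpp_headers/*.o entries running through this stretch of the build are SPDK's header self-containment suite: each public header gets its own translation unit, so a header missing a transitive include fails to compile on its own. A minimal sketch of the idea, assuming GNU mktemp and an include/ layout like the repo's (compiler flags are illustrative, not the build's exact rules):

    shopt -s nullglob
    for hdr in include/spdk/*.h; do
        tu=$(mktemp --suffix=.cpp)
        printf '#include <spdk/%s>\n' "$(basename "$hdr")" > "$tu"
        # A header that cannot be compiled alone is not self-contained.
        c++ -Iinclude -std=c++17 -c "$tu" -o /dev/null \
            || echo "not self-contained: $hdr"
        rm -f "$tu"
    done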
LINK spdk_dd 00:03:04.845 CXX test/cpp_headers/bit_pool.o 00:03:04.845 LINK jsoncat 00:03:04.845 CC test/app/stub/stub.o 00:03:04.845 CC examples/idxd/perf/perf.o 00:03:05.106 LINK pci_ut 00:03:05.106 CXX test/cpp_headers/blob_bdev.o 00:03:05.106 CXX test/cpp_headers/blobfs_bdev.o 00:03:05.106 CC test/rpc_client/rpc_client_test.o 00:03:05.106 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:05.106 LINK stub 00:03:05.106 CXX test/cpp_headers/blobfs.o 00:03:05.106 CC app/fio/nvme/fio_plugin.o 00:03:05.364 CC app/fio/bdev/fio_plugin.o 00:03:05.364 LINK rpc_client_test 00:03:05.364 LINK idxd_perf 00:03:05.365 CXX test/cpp_headers/blob.o 00:03:05.365 LINK hello_fsdev 00:03:05.365 CC examples/accel/perf/accel_perf.o 00:03:05.365 CXX test/cpp_headers/conf.o 00:03:05.365 CC test/accel/dif/dif.o 00:03:05.365 CXX test/cpp_headers/config.o 00:03:05.365 CXX test/cpp_headers/cpuset.o 00:03:05.365 CXX test/cpp_headers/crc16.o 00:03:05.623 CC app/vhost/vhost.o 00:03:05.623 CXX test/cpp_headers/crc32.o 00:03:05.623 LINK spdk_nvme 00:03:05.623 CC examples/blob/hello_world/hello_blob.o 00:03:05.623 CC examples/blob/cli/blobcli.o 00:03:05.623 LINK vhost 00:03:05.623 LINK spdk_bdev 00:03:05.883 CXX test/cpp_headers/crc64.o 00:03:05.883 LINK memory_ut 00:03:05.883 LINK iscsi_fuzz 00:03:05.883 LINK accel_perf 00:03:05.883 CXX test/cpp_headers/dif.o 00:03:05.883 CC examples/nvme/hello_world/hello_world.o 00:03:05.883 LINK hello_blob 00:03:05.883 CC examples/nvme/reconnect/reconnect.o 00:03:05.883 CXX test/cpp_headers/dma.o 00:03:05.883 CXX test/cpp_headers/endian.o 00:03:05.883 CXX test/cpp_headers/env_dpdk.o 00:03:05.883 CXX test/cpp_headers/env.o 00:03:06.143 CXX test/cpp_headers/event.o 00:03:06.143 CXX test/cpp_headers/fd_group.o 00:03:06.143 CXX test/cpp_headers/fd.o 00:03:06.143 LINK dif 00:03:06.143 LINK hello_world 00:03:06.143 CXX test/cpp_headers/file.o 00:03:06.143 CXX test/cpp_headers/fsdev.o 00:03:06.143 LINK blobcli 00:03:06.143 CXX test/cpp_headers/fsdev_module.o 00:03:06.143 CXX test/cpp_headers/ftl.o 00:03:06.143 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.143 LINK reconnect 00:03:06.143 CC test/blobfs/mkfs/mkfs.o 00:03:06.404 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.404 CC examples/nvme/arbitration/arbitration.o 00:03:06.404 CC test/lvol/esnap/esnap.o 00:03:06.404 CC test/nvme/aer/aer.o 00:03:06.404 CXX test/cpp_headers/fuse_dispatcher.o 00:03:06.404 CC examples/nvme/hotplug/hotplug.o 00:03:06.404 CC test/nvme/reset/reset.o 00:03:06.404 LINK mkfs 00:03:06.404 LINK hello_bdev 00:03:06.404 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.404 CXX test/cpp_headers/gpt_spec.o 00:03:06.663 LINK cmb_copy 00:03:06.663 LINK hotplug 00:03:06.663 LINK arbitration 00:03:06.663 CC examples/nvme/abort/abort.o 00:03:06.663 CXX test/cpp_headers/hexlify.o 00:03:06.663 LINK reset 00:03:06.663 LINK aer 00:03:06.663 CXX test/cpp_headers/histogram_data.o 00:03:06.663 CC examples/bdev/bdevperf/bdevperf.o 00:03:06.921 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.921 LINK nvme_manage 00:03:06.921 CC test/nvme/sgl/sgl.o 00:03:06.921 CC test/nvme/e2edp/nvme_dp.o 00:03:06.921 CXX test/cpp_headers/idxd.o 00:03:06.921 CXX test/cpp_headers/idxd_spec.o 00:03:06.921 CC test/nvme/overhead/overhead.o 00:03:06.921 LINK pmr_persistence 00:03:06.921 CC test/bdev/bdevio/bdevio.o 00:03:06.921 LINK abort 00:03:06.921 CXX test/cpp_headers/init.o 00:03:07.179 CXX test/cpp_headers/ioat.o 00:03:07.179 LINK sgl 00:03:07.179 CXX test/cpp_headers/ioat_spec.o 00:03:07.179 LINK overhead 00:03:07.179 LINK nvme_dp 
00:03:07.179 CXX test/cpp_headers/iscsi_spec.o 00:03:07.179 CXX test/cpp_headers/json.o 00:03:07.179 CXX test/cpp_headers/jsonrpc.o 00:03:07.179 CXX test/cpp_headers/keyring.o 00:03:07.179 CXX test/cpp_headers/keyring_module.o 00:03:07.179 CXX test/cpp_headers/likely.o 00:03:07.179 CC test/nvme/err_injection/err_injection.o 00:03:07.438 CC test/nvme/startup/startup.o 00:03:07.438 LINK bdevio 00:03:07.438 CC test/nvme/reserve/reserve.o 00:03:07.438 CXX test/cpp_headers/log.o 00:03:07.438 CC test/nvme/simple_copy/simple_copy.o 00:03:07.438 CC test/nvme/connect_stress/connect_stress.o 00:03:07.438 CC test/nvme/boot_partition/boot_partition.o 00:03:07.438 LINK err_injection 00:03:07.438 LINK bdevperf 00:03:07.438 LINK startup 00:03:07.438 CXX test/cpp_headers/lvol.o 00:03:07.438 CXX test/cpp_headers/md5.o 00:03:07.438 CXX test/cpp_headers/memory.o 00:03:07.438 LINK boot_partition 00:03:07.438 LINK reserve 00:03:07.696 LINK connect_stress 00:03:07.696 CXX test/cpp_headers/mmio.o 00:03:07.696 LINK simple_copy 00:03:07.696 CC test/nvme/compliance/nvme_compliance.o 00:03:07.696 CXX test/cpp_headers/nbd.o 00:03:07.696 CC test/nvme/fused_ordering/fused_ordering.o 00:03:07.696 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.696 CXX test/cpp_headers/net.o 00:03:07.696 CC test/nvme/fdp/fdp.o 00:03:07.696 CC test/nvme/cuse/cuse.o 00:03:07.696 CXX test/cpp_headers/notify.o 00:03:07.696 CC examples/nvmf/nvmf/nvmf.o 00:03:07.696 CXX test/cpp_headers/nvme.o 00:03:07.955 LINK doorbell_aers 00:03:07.955 CXX test/cpp_headers/nvme_intel.o 00:03:07.955 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.955 LINK fused_ordering 00:03:07.955 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.955 CXX test/cpp_headers/nvme_spec.o 00:03:07.955 CXX test/cpp_headers/nvme_zns.o 00:03:07.955 LINK nvme_compliance 00:03:07.955 CXX test/cpp_headers/nvmf_cmd.o 00:03:07.955 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:07.955 LINK nvmf 00:03:08.212 LINK fdp 00:03:08.212 CXX test/cpp_headers/nvmf.o 00:03:08.212 CXX test/cpp_headers/nvmf_spec.o 00:03:08.212 CXX test/cpp_headers/nvmf_transport.o 00:03:08.212 CXX test/cpp_headers/opal.o 00:03:08.212 CXX test/cpp_headers/opal_spec.o 00:03:08.212 CXX test/cpp_headers/pci_ids.o 00:03:08.212 CXX test/cpp_headers/pipe.o 00:03:08.212 CXX test/cpp_headers/queue.o 00:03:08.212 CXX test/cpp_headers/reduce.o 00:03:08.212 CXX test/cpp_headers/rpc.o 00:03:08.212 CXX test/cpp_headers/scheduler.o 00:03:08.212 CXX test/cpp_headers/scsi.o 00:03:08.212 CXX test/cpp_headers/scsi_spec.o 00:03:08.212 CXX test/cpp_headers/sock.o 00:03:08.470 CXX test/cpp_headers/stdinc.o 00:03:08.470 CXX test/cpp_headers/string.o 00:03:08.470 CXX test/cpp_headers/thread.o 00:03:08.470 CXX test/cpp_headers/trace.o 00:03:08.470 CXX test/cpp_headers/trace_parser.o 00:03:08.470 CXX test/cpp_headers/tree.o 00:03:08.470 CXX test/cpp_headers/ublk.o 00:03:08.470 CXX test/cpp_headers/util.o 00:03:08.470 CXX test/cpp_headers/uuid.o 00:03:08.470 CXX test/cpp_headers/version.o 00:03:08.470 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.470 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.470 CXX test/cpp_headers/vhost.o 00:03:08.470 CXX test/cpp_headers/vmd.o 00:03:08.470 CXX test/cpp_headers/xor.o 00:03:08.470 CXX test/cpp_headers/zipf.o 00:03:09.042 LINK cuse 00:03:10.955 LINK esnap 00:03:10.955 00:03:10.955 real 1m6.856s 00:03:10.955 user 6m13.835s 00:03:10.955 sys 1m10.937s 00:03:10.955 12:01:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:10.955 12:01:11 make -- common/autotest_common.sh@10 -- $ set +x 00:03:10.955 
************************************ 00:03:10.955 END TEST make 00:03:10.955 ************************************ 00:03:10.955 12:01:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.955 12:01:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.955 12:01:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.955 12:01:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.955 12:01:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.955 12:01:11 -- pm/common@44 -- $ pid=5065 00:03:10.956 12:01:11 -- pm/common@50 -- $ kill -TERM 5065 00:03:10.956 12:01:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.956 12:01:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.956 12:01:11 -- pm/common@44 -- $ pid=5066 00:03:10.956 12:01:11 -- pm/common@50 -- $ kill -TERM 5066 00:03:10.956 12:01:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:10.956 12:01:11 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:10.956 12:01:11 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:10.956 12:01:11 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:10.956 12:01:11 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:10.956 12:01:12 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:10.956 12:01:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.956 12:01:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.956 12:01:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.956 12:01:12 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.956 12:01:12 -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.956 12:01:12 -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.956 12:01:12 -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.956 12:01:12 -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.956 12:01:12 -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.956 12:01:12 -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.956 12:01:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.956 12:01:12 -- scripts/common.sh@344 -- # case "$op" in 00:03:10.956 12:01:12 -- scripts/common.sh@345 -- # : 1 00:03:10.956 12:01:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.956 12:01:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:10.956 12:01:12 -- scripts/common.sh@365 -- # decimal 1 00:03:10.956 12:01:12 -- scripts/common.sh@353 -- # local d=1 00:03:10.956 12:01:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.956 12:01:12 -- scripts/common.sh@355 -- # echo 1 00:03:10.956 12:01:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.956 12:01:12 -- scripts/common.sh@366 -- # decimal 2 00:03:10.956 12:01:12 -- scripts/common.sh@353 -- # local d=2 00:03:10.956 12:01:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.956 12:01:12 -- scripts/common.sh@355 -- # echo 2 00:03:10.956 12:01:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.956 12:01:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.956 12:01:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.956 12:01:12 -- scripts/common.sh@368 -- # return 0 00:03:10.956 12:01:12 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.956 12:01:12 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.956 --rc genhtml_branch_coverage=1 00:03:10.956 --rc genhtml_function_coverage=1 00:03:10.956 --rc genhtml_legend=1 00:03:10.956 --rc geninfo_all_blocks=1 00:03:10.956 --rc geninfo_unexecuted_blocks=1 00:03:10.956 00:03:10.956 ' 00:03:10.956 12:01:12 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.956 --rc genhtml_branch_coverage=1 00:03:10.956 --rc genhtml_function_coverage=1 00:03:10.956 --rc genhtml_legend=1 00:03:10.956 --rc geninfo_all_blocks=1 00:03:10.956 --rc geninfo_unexecuted_blocks=1 00:03:10.956 00:03:10.956 ' 00:03:10.956 12:01:12 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.956 --rc genhtml_branch_coverage=1 00:03:10.956 --rc genhtml_function_coverage=1 00:03:10.956 --rc genhtml_legend=1 00:03:10.956 --rc geninfo_all_blocks=1 00:03:10.956 --rc geninfo_unexecuted_blocks=1 00:03:10.956 00:03:10.956 ' 00:03:10.956 12:01:12 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.956 --rc genhtml_branch_coverage=1 00:03:10.956 --rc genhtml_function_coverage=1 00:03:10.956 --rc genhtml_legend=1 00:03:10.956 --rc geninfo_all_blocks=1 00:03:10.956 --rc geninfo_unexecuted_blocks=1 00:03:10.956 00:03:10.956 ' 00:03:10.956 12:01:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:10.956 12:01:12 -- nvmf/common.sh@7 -- # uname -s 00:03:10.956 12:01:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.956 12:01:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.956 12:01:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.956 12:01:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.956 12:01:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.956 12:01:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.956 12:01:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.956 12:01:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.956 12:01:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.956 12:01:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.217 12:01:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fc777c45-39d9-4fee-b620-435140e95f34 00:03:11.217 
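The scripts/common.sh xtrace above is a dotted-version comparison: lcov's reported version (1.15) is checked against 2 to decide whether the older-lcov --rc coverage options are needed. A stripped-down sketch of the same pattern, using a hypothetical version_lt helper rather than the repo's cmp_versions:

    version_lt() {                      # true when $1 sorts before $2
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
            (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
        done
        return 1                        # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 \
        && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'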
12:01:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=fc777c45-39d9-4fee-b620-435140e95f34 00:03:11.217 12:01:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.217 12:01:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.217 12:01:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:11.217 12:01:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.217 12:01:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:11.217 12:01:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.217 12:01:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.217 12:01:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.217 12:01:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.217 12:01:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.217 12:01:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.217 12:01:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.217 12:01:12 -- paths/export.sh@5 -- # export PATH 00:03:11.217 12:01:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.217 12:01:12 -- nvmf/common.sh@51 -- # : 0 00:03:11.217 12:01:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.217 12:01:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:11.217 12:01:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.217 12:01:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.217 12:01:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.217 12:01:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.217 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.217 12:01:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.217 12:01:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.217 12:01:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.217 12:01:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.217 12:01:12 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.217 12:01:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.217 12:01:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.217 12:01:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.217 12:01:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.217 12:01:12 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.217 12:01:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.217 12:01:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.217 12:01:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.218 12:01:12 -- spdk/autotest.sh@48 -- # udevadm_pid=54233 00:03:11.218 12:01:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.218 12:01:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.218 12:01:12 -- pm/common@17 -- # local monitor 00:03:11.218 12:01:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.218 12:01:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.218 12:01:12 -- pm/common@25 -- # sleep 1 00:03:11.218 12:01:12 -- pm/common@21 -- # date +%s 00:03:11.218 12:01:12 -- pm/common@21 -- # date +%s 00:03:11.218 12:01:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732536072 00:03:11.218 12:01:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732536072 00:03:11.218 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732536072_collect-vmstat.pm.log 00:03:11.218 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732536072_collect-cpu-load.pm.log 00:03:12.162 12:01:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.162 12:01:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.162 12:01:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.162 12:01:13 -- common/autotest_common.sh@10 -- # set +x 00:03:12.162 12:01:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.162 12:01:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:12.162 12:01:13 -- common/autotest_common.sh@10 -- # set +x 00:03:12.162 12:01:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:12.162 12:01:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:12.162 12:01:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:12.162 12:01:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:12.162 12:01:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:12.162 12:01:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.162 12:01:13 -- common/autotest_common.sh@1457 -- # uname 00:03:12.162 12:01:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:12.162 12:01:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.162 12:01:13 -- common/autotest_common.sh@1477 -- # uname 00:03:12.162 12:01:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:12.162 12:01:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.162 12:01:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.162 lcov: LCOV version 1.15 00:03:12.162 12:01:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.097 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.097 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:45.225 12:01:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:45.225 12:01:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.225 12:01:43 -- common/autotest_common.sh@10 -- # set +x 00:03:45.225 12:01:43 -- spdk/autotest.sh@78 -- # rm -f 00:03:45.225 12:01:43 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.225 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:45.225 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:45.225 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:45.225 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:45.225 12:01:45 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:45.225 12:01:45 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:45.225 12:01:45 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:45.225 12:01:45 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:45.225 12:01:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.225 12:01:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:45.225 12:01:45 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:45.225 12:01:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.225 12:01:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:45.225 12:01:45 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:45.225 12:01:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.225 12:01:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:03:45.225 12:01:45 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:45.225 12:01:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.225 12:01:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:03:45.225 12:01:45 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:45.225 12:01:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.225 12:01:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:03:45.225 12:01:45 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:45.225 12:01:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:45.225 12:01:45 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.225 12:01:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.225 12:01:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:03:45.226 12:01:45 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:45.226 12:01:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:45.226 12:01:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.226 12:01:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:45.226 12:01:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:03:45.226 12:01:45 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:03:45.226 12:01:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:45.226 12:01:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:45.226 12:01:45 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:45.226 12:01:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.226 12:01:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.226 12:01:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:45.226 12:01:45 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:45.226 12:01:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:45.226 No valid GPT data, bailing 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # pt= 00:03:45.226 12:01:45 -- scripts/common.sh@395 -- # return 1 00:03:45.226 12:01:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:45.226 1+0 records in 00:03:45.226 1+0 records out 00:03:45.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296701 s, 35.3 MB/s 00:03:45.226 12:01:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.226 12:01:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.226 12:01:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:45.226 12:01:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:45.226 12:01:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:45.226 No valid GPT data, bailing 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # pt= 00:03:45.226 12:01:45 -- scripts/common.sh@395 -- # return 1 00:03:45.226 12:01:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:45.226 1+0 records in 00:03:45.226 1+0 records out 00:03:45.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616584 s, 170 MB/s 00:03:45.226 12:01:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.226 12:01:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.226 12:01:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:45.226 12:01:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:45.226 12:01:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:45.226 No valid GPT data, bailing 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # pt= 00:03:45.226 12:01:45 -- scripts/common.sh@395 -- # return 1 00:03:45.226 12:01:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:45.226 1+0 
records in 00:03:45.226 1+0 records out 00:03:45.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616098 s, 170 MB/s 00:03:45.226 12:01:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.226 12:01:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.226 12:01:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:45.226 12:01:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:45.226 12:01:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:45.226 No valid GPT data, bailing 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # pt= 00:03:45.226 12:01:45 -- scripts/common.sh@395 -- # return 1 00:03:45.226 12:01:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:45.226 1+0 records in 00:03:45.226 1+0 records out 00:03:45.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509382 s, 206 MB/s 00:03:45.226 12:01:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.226 12:01:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.226 12:01:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:45.226 12:01:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:45.226 12:01:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:45.226 No valid GPT data, bailing 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # pt= 00:03:45.226 12:01:45 -- scripts/common.sh@395 -- # return 1 00:03:45.226 12:01:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:45.226 1+0 records in 00:03:45.226 1+0 records out 00:03:45.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529224 s, 198 MB/s 00:03:45.226 12:01:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.226 12:01:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:45.226 12:01:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:45.226 12:01:45 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:45.226 12:01:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:45.226 No valid GPT data, bailing 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:45.226 12:01:45 -- scripts/common.sh@394 -- # pt= 00:03:45.226 12:01:45 -- scripts/common.sh@395 -- # return 1 00:03:45.226 12:01:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:45.226 1+0 records in 00:03:45.226 1+0 records out 00:03:45.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639757 s, 164 MB/s 00:03:45.226 12:01:45 -- spdk/autotest.sh@105 -- # sync 00:03:45.226 12:01:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:45.226 12:01:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:45.226 12:01:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:46.611 12:01:47 -- spdk/autotest.sh@111 -- # uname -s 00:03:46.611 12:01:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:46.611 12:01:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:46.611 12:01:47 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:46.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.443 
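The block above is autotest's pre-test scrub: zoned namespaces are excluded, then every remaining /dev/nvme*n!(*p*) namespace with no recognizable partition table gets its first MiB zeroed so stale metadata cannot bleed into the run. A condensed sketch of that loop, assuming the sysfs and blkid interfaces seen in the trace (the real script also consults spdk-gpt.py, omitted here):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        name=$(basename "$dev")
        # "none" means a conventional (non-zoned) namespace; skip the rest.
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned != none ]] && continue
        # blkid prints nothing when no partition table is present.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done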
Hugepages 00:03:47.443 node hugesize free / total 00:03:47.443 node0 1048576kB 0 / 0 00:03:47.443 node0 2048kB 0 / 0 00:03:47.443 00:03:47.443 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.443 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:47.443 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:47.443 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:47.817 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:47.817 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:47.817 12:01:48 -- spdk/autotest.sh@117 -- # uname -s 00:03:47.817 12:01:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:47.817 12:01:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:47.817 12:01:48 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.030 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.030 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.030 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.030 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.030 12:01:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:49.972 12:01:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:49.972 12:01:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:49.972 12:01:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:49.972 12:01:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:49.972 12:01:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:49.972 12:01:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:49.972 12:01:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:49.972 12:01:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:49.972 12:01:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:49.972 12:01:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:49.972 12:01:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:49.972 12:01:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.494 Waiting for block devices as requested 00:03:50.494 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:50.756 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:50.756 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:50.756 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:56.050 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:56.050 12:01:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:56.050 12:01:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:56.050 12:01:56 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:56.050 12:01:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:56.050 12:01:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:56.050 12:01:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:56.050 12:01:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:56.050 12:01:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:56.050 12:01:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1543 -- # continue 00:03:56.050 12:01:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:56.050 12:01:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:56.050 12:01:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:56.050 12:01:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:56.050 12:01:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1543 -- # continue 00:03:56.050 12:01:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:56.050 12:01:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:03:56.050 12:01:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:56.050 12:01:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:56.050 12:01:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:56.050 12:01:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:56.051 12:01:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:56.051 12:01:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:56.051 12:01:56 -- common/autotest_common.sh@1543 -- # continue 00:03:56.051 12:01:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:56.051 12:01:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:56.051 12:01:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:56.051 12:01:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:03:56.051 12:01:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:56.051 12:01:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:56.051 12:01:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:56.051 12:01:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:03:56.051 12:01:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:03:56.051 12:01:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:03:56.051 12:01:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:03:56.051 12:01:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:56.051 12:01:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:56.051 12:01:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:56.051 12:01:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:56.051 12:01:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:56.051 12:01:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:56.051 12:01:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:56.051 12:01:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:56.051 12:01:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:56.051 12:01:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:03:56.051 12:01:56 -- common/autotest_common.sh@1543 -- # continue 00:03:56.051 12:01:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:56.051 12:01:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.051 12:01:56 -- common/autotest_common.sh@10 -- # set +x 00:03:56.051 12:01:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:56.051 12:01:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.051 12:01:57 -- common/autotest_common.sh@10 -- # set +x 00:03:56.051 12:01:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:56.625 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.197 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.197 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.197 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.197 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.458 12:01:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:57.458 12:01:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.458 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.458 12:01:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:57.458 12:01:58 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:57.458 12:01:58 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:57.458 12:01:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:57.458 12:01:58 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:57.458 12:01:58 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:57.458 12:01:58 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:57.458 12:01:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:57.458 12:01:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:57.458 12:01:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:57.458 12:01:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:57.458 12:01:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:57.458 12:01:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:57.458 12:01:58 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:57.458 12:01:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:57.458 12:01:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:57.458 12:01:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:57.458 12:01:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:57.458 12:01:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:57.458 12:01:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:57.458 12:01:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:57.458 12:01:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:57.458 12:01:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:57.458 12:01:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:57.458 12:01:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:57.458 12:01:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:57.458 12:01:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
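In the controller probes above, the oacs word (0x12a) is masked for bit 3 (0x8, namespace management) and unvmcap is checked for zero unallocated capacity; both hold for every QEMU controller, so each iteration hits continue and nothing is reverted. A sketch of that nvme-cli parsing for a single controller (device path illustrative):

    ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    # 0x12a & 0x8 == 8: the controller can manage namespaces.
    if (( (oacs & 0x8) != 0 && unvmcap == 0 )); then
        echo "$ctrlr: namespace management supported, no capacity to revert"
    fi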
00:03:57.458 12:01:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:57.458 12:01:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:57.458 12:01:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:57.458 12:01:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:57.458 12:01:58 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:57.458 12:01:58 -- common/autotest_common.sh@1572 -- # return 0 00:03:57.458 12:01:58 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:57.458 12:01:58 -- common/autotest_common.sh@1580 -- # return 0 00:03:57.458 12:01:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:57.458 12:01:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:57.458 12:01:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:57.458 12:01:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:57.458 12:01:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:57.458 12:01:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.458 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.458 12:01:58 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:57.458 12:01:58 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:57.458 12:01:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.458 12:01:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.458 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.458 ************************************ 00:03:57.458 START TEST env 00:03:57.458 ************************************ 00:03:57.458 12:01:58 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:57.458 * Looking for test storage... 00:03:57.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.720 12:01:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.720 12:01:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.720 12:01:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.720 12:01:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.720 12:01:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.720 12:01:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.720 12:01:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.720 12:01:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.720 12:01:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.720 12:01:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.720 12:01:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.720 12:01:58 env -- scripts/common.sh@344 -- # case "$op" in 00:03:57.720 12:01:58 env -- scripts/common.sh@345 -- # : 1 00:03:57.720 12:01:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.720 12:01:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.720 12:01:58 env -- scripts/common.sh@365 -- # decimal 1 00:03:57.720 12:01:58 env -- scripts/common.sh@353 -- # local d=1 00:03:57.720 12:01:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.720 12:01:58 env -- scripts/common.sh@355 -- # echo 1 00:03:57.720 12:01:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.720 12:01:58 env -- scripts/common.sh@366 -- # decimal 2 00:03:57.720 12:01:58 env -- scripts/common.sh@353 -- # local d=2 00:03:57.720 12:01:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.720 12:01:58 env -- scripts/common.sh@355 -- # echo 2 00:03:57.720 12:01:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.720 12:01:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.720 12:01:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.720 12:01:58 env -- scripts/common.sh@368 -- # return 0 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.720 --rc genhtml_branch_coverage=1 00:03:57.720 --rc genhtml_function_coverage=1 00:03:57.720 --rc genhtml_legend=1 00:03:57.720 --rc geninfo_all_blocks=1 00:03:57.720 --rc geninfo_unexecuted_blocks=1 00:03:57.720 00:03:57.720 ' 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.720 --rc genhtml_branch_coverage=1 00:03:57.720 --rc genhtml_function_coverage=1 00:03:57.720 --rc genhtml_legend=1 00:03:57.720 --rc geninfo_all_blocks=1 00:03:57.720 --rc geninfo_unexecuted_blocks=1 00:03:57.720 00:03:57.720 ' 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.720 --rc genhtml_branch_coverage=1 00:03:57.720 --rc genhtml_function_coverage=1 00:03:57.720 --rc genhtml_legend=1 00:03:57.720 --rc geninfo_all_blocks=1 00:03:57.720 --rc geninfo_unexecuted_blocks=1 00:03:57.720 00:03:57.720 ' 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.720 --rc genhtml_branch_coverage=1 00:03:57.720 --rc genhtml_function_coverage=1 00:03:57.720 --rc genhtml_legend=1 00:03:57.720 --rc geninfo_all_blocks=1 00:03:57.720 --rc geninfo_unexecuted_blocks=1 00:03:57.720 00:03:57.720 ' 00:03:57.720 12:01:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.720 12:01:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.720 12:01:58 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.720 ************************************ 00:03:57.720 START TEST env_memory 00:03:57.720 ************************************ 00:03:57.720 12:01:58 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:57.720 00:03:57.720 00:03:57.720 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.720 http://cunit.sourceforge.net/ 00:03:57.720 00:03:57.720 00:03:57.720 Suite: memory 00:03:57.720 Test: alloc and free memory map ...[2024-11-25 12:01:58.693694] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:57.720 passed 00:03:57.720 Test: mem map translation ...[2024-11-25 12:01:58.733403] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:57.720 [2024-11-25 12:01:58.733623] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:57.720 [2024-11-25 12:01:58.734049] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:57.720 [2024-11-25 12:01:58.734070] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:57.720 passed 00:03:57.988 Test: mem map registration ...[2024-11-25 12:01:58.802937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:57.988 [2024-11-25 12:01:58.803158] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:57.988 passed 00:03:57.988 Test: mem map adjacent registrations ...passed 00:03:57.988 00:03:57.988 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.988 suites 1 1 n/a 0 0 00:03:57.988 tests 4 4 4 0 0 00:03:57.988 asserts 152 152 152 0 n/a 00:03:57.988 00:03:57.988 Elapsed time = 0.233 seconds 00:03:57.988 00:03:57.988 real 0m0.277s 00:03:57.988 user 0m0.236s 00:03:57.988 sys 0m0.027s 00:03:57.988 12:01:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.988 12:01:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:57.988 ************************************ 00:03:57.988 END TEST env_memory 00:03:57.988 ************************************ 00:03:57.988 12:01:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:57.988 12:01:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.988 12:01:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.988 12:01:58 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.988 ************************************ 00:03:57.988 START TEST env_vtophys 00:03:57.988 ************************************ 00:03:57.988 12:01:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:57.988 EAL: lib.eal log level changed from notice to debug 00:03:57.988 EAL: Detected lcore 0 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 1 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 2 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 3 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 4 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 5 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 6 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 7 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 8 as core 0 on socket 0 00:03:57.988 EAL: Detected lcore 9 as core 0 on socket 0 00:03:57.988 EAL: Maximum logical cores by configuration: 128 00:03:57.988 EAL: Detected CPU lcores: 10 00:03:57.988 EAL: Detected NUMA nodes: 1 00:03:57.988 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:57.988 EAL: Detected shared linkage of DPDK 00:03:57.988 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:57.988 EAL: Selected IOVA mode 'PA' 00:03:57.988 EAL: Probing VFIO support... 00:03:57.988 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:57.988 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:57.988 EAL: Ask a virtual area of 0x2e000 bytes 00:03:57.988 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:57.988 EAL: Setting up physically contiguous memory... 00:03:57.988 EAL: Setting maximum number of open files to 524288 00:03:57.988 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:57.988 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:57.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.988 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:57.988 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.988 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:57.988 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:57.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.988 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:57.988 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.988 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:57.988 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:57.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.988 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:57.988 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.988 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:57.988 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:57.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.988 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:57.988 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.989 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.989 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:57.989 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:57.989 EAL: Hugepages will be freed exactly as allocated. 00:03:57.989 EAL: No shared files mode enabled, IPC is disabled 00:03:57.989 EAL: No shared files mode enabled, IPC is disabled 00:03:58.250 EAL: TSC frequency is ~2600000 KHz 00:03:58.250 EAL: Main lcore 0 is ready (tid=7fb735833a40;cpuset=[0]) 00:03:58.250 EAL: Trying to obtain current memory policy. 00:03:58.250 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.250 EAL: Restoring previous memory policy: 0 00:03:58.250 EAL: request: mp_malloc_sync 00:03:58.250 EAL: No shared files mode enabled, IPC is disabled 00:03:58.251 EAL: Heap on socket 0 was expanded by 2MB 00:03:58.251 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:58.251 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:58.251 EAL: Mem event callback 'spdk:(nil)' registered 00:03:58.251 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:58.251 00:03:58.251 00:03:58.251 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.251 http://cunit.sourceforge.net/ 00:03:58.251 00:03:58.251 00:03:58.251 Suite: components_suite 00:03:58.512 Test: vtophys_malloc_test ...passed 00:03:58.512 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:58.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.512 EAL: Restoring previous memory policy: 4 00:03:58.512 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.512 EAL: request: mp_malloc_sync 00:03:58.512 EAL: No shared files mode enabled, IPC is disabled 00:03:58.512 EAL: Heap on socket 0 was expanded by 4MB 00:03:58.773 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.773 EAL: request: mp_malloc_sync 00:03:58.773 EAL: No shared files mode enabled, IPC is disabled 00:03:58.773 EAL: Heap on socket 0 was shrunk by 4MB 00:03:58.773 EAL: Trying to obtain current memory policy. 00:03:58.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.773 EAL: Restoring previous memory policy: 4 00:03:58.773 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.773 EAL: request: mp_malloc_sync 00:03:58.773 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was expanded by 6MB 00:03:58.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.774 EAL: request: mp_malloc_sync 00:03:58.774 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was shrunk by 6MB 00:03:58.774 EAL: Trying to obtain current memory policy. 00:03:58.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.774 EAL: Restoring previous memory policy: 4 00:03:58.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.774 EAL: request: mp_malloc_sync 00:03:58.774 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was expanded by 10MB 00:03:58.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.774 EAL: request: mp_malloc_sync 00:03:58.774 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was shrunk by 10MB 00:03:58.774 EAL: Trying to obtain current memory policy. 00:03:58.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.774 EAL: Restoring previous memory policy: 4 00:03:58.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.774 EAL: request: mp_malloc_sync 00:03:58.774 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was expanded by 18MB 00:03:58.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.774 EAL: request: mp_malloc_sync 00:03:58.774 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was shrunk by 18MB 00:03:58.774 EAL: Trying to obtain current memory policy. 00:03:58.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.774 EAL: Restoring previous memory policy: 4 00:03:58.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.774 EAL: request: mp_malloc_sync 00:03:58.774 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was expanded by 34MB 00:03:58.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.774 EAL: request: mp_malloc_sync 00:03:58.774 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was shrunk by 34MB 00:03:58.774 EAL: Trying to obtain current memory policy. 
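[Annotation] A note on the env_memory failures logged further up: spdk_mem_map_set_translation rejects any vaddr/len pair that is not 2 MB aligned, which is exactly what the vaddr=2097152 len=1234 and vaddr=1234 errors exercise. A minimal C sketch of that API, assuming an initialized SPDK env and the declarations in include/spdk/env.h (the ops struct, addresses, and translation value here are illustrative, not taken from the unit test's source):

#include "spdk/env.h"

#define CHUNK_2MB 0x200000ULL

/* Accept every registration notification; a real consumer would
 * build its own translations here. */
static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	return 0;
}

static const struct spdk_mem_map_ops ops = { .notify_cb = notify_cb };

void
mem_map_sketch(void)
{
	/* default_translation 0 is returned for unmapped regions */
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
	uint64_t len = CHUNK_2MB;

	/* Both vaddr and size must be multiples of 2 MB, or this fails with
	 * the "invalid spdk_mem_map_set_translation parameters" error above. */
	spdk_mem_map_set_translation(map, 2 * CHUNK_2MB, CHUNK_2MB, 0x40000000);
	uint64_t t = spdk_mem_map_translate(map, 2 * CHUNK_2MB, &len);
	(void)t;
	spdk_mem_map_free(&map);
}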
00:03:58.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.774 EAL: Restoring previous memory policy: 4 00:03:58.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.774 EAL: request: mp_malloc_sync 00:03:58.774 EAL: No shared files mode enabled, IPC is disabled 00:03:58.774 EAL: Heap on socket 0 was expanded by 66MB 00:03:59.036 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.036 EAL: request: mp_malloc_sync 00:03:59.036 EAL: No shared files mode enabled, IPC is disabled 00:03:59.036 EAL: Heap on socket 0 was shrunk by 66MB 00:03:59.036 EAL: Trying to obtain current memory policy. 00:03:59.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.036 EAL: Restoring previous memory policy: 4 00:03:59.036 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.036 EAL: request: mp_malloc_sync 00:03:59.036 EAL: No shared files mode enabled, IPC is disabled 00:03:59.036 EAL: Heap on socket 0 was expanded by 130MB 00:03:59.298 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.298 EAL: request: mp_malloc_sync 00:03:59.298 EAL: No shared files mode enabled, IPC is disabled 00:03:59.298 EAL: Heap on socket 0 was shrunk by 130MB 00:03:59.298 EAL: Trying to obtain current memory policy. 00:03:59.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.560 EAL: Restoring previous memory policy: 4 00:03:59.560 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.560 EAL: request: mp_malloc_sync 00:03:59.560 EAL: No shared files mode enabled, IPC is disabled 00:03:59.560 EAL: Heap on socket 0 was expanded by 258MB 00:03:59.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.822 EAL: request: mp_malloc_sync 00:03:59.822 EAL: No shared files mode enabled, IPC is disabled 00:03:59.822 EAL: Heap on socket 0 was shrunk by 258MB 00:04:00.083 EAL: Trying to obtain current memory policy. 00:04:00.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.344 EAL: Restoring previous memory policy: 4 00:04:00.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.344 EAL: request: mp_malloc_sync 00:04:00.344 EAL: No shared files mode enabled, IPC is disabled 00:04:00.344 EAL: Heap on socket 0 was expanded by 514MB 00:04:00.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.178 EAL: request: mp_malloc_sync 00:04:01.178 EAL: No shared files mode enabled, IPC is disabled 00:04:01.178 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.752 EAL: Trying to obtain current memory policy. 
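[Annotation] The expand/shrink pairs in this run come from vtophys_malloc_test allocating progressively larger buffers through the env layer. Application code reaches the same heap path roughly as follows; a hedged sketch against include/spdk/env.h, with error handling trimmed:

#include "spdk/env.h"

void
dma_alloc_sketch(void)
{
	/* 1 MB, 2 MB-aligned, zeroed; drives the same
	 * "Heap on socket 0 was expanded by ..." callbacks seen above. */
	void *buf = spdk_dma_zmalloc(1024 * 1024, 0x200000, NULL);
	if (buf == NULL) {
		return;
	}

	/* Translate virtual to physical, as the vtophys tests do. */
	uint64_t paddr = spdk_vtophys(buf, NULL);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		/* buffer not registered / not translatable */
	}

	spdk_dma_free(buf);
}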
00:04:01.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.013 EAL: Restoring previous memory policy: 4 00:04:02.013 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.013 EAL: request: mp_malloc_sync 00:04:02.013 EAL: No shared files mode enabled, IPC is disabled 00:04:02.013 EAL: Heap on socket 0 was expanded by 1026MB 00:04:03.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.397 EAL: request: mp_malloc_sync 00:04:03.397 EAL: No shared files mode enabled, IPC is disabled 00:04:03.397 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:04.337 passed 00:04:04.337 00:04:04.337 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.337 suites 1 1 n/a 0 0 00:04:04.337 tests 2 2 2 0 0 00:04:04.337 asserts 5677 5677 5677 0 n/a 00:04:04.337 00:04:04.337 Elapsed time = 5.933 seconds 00:04:04.337 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.337 EAL: request: mp_malloc_sync 00:04:04.337 EAL: No shared files mode enabled, IPC is disabled 00:04:04.337 EAL: Heap on socket 0 was shrunk by 2MB 00:04:04.337 EAL: No shared files mode enabled, IPC is disabled 00:04:04.337 EAL: No shared files mode enabled, IPC is disabled 00:04:04.337 EAL: No shared files mode enabled, IPC is disabled 00:04:04.337 00:04:04.337 real 0m6.219s 00:04:04.337 user 0m4.894s 00:04:04.337 sys 0m1.151s 00:04:04.337 12:02:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.337 ************************************ 00:04:04.337 END TEST env_vtophys 00:04:04.337 ************************************ 00:04:04.337 12:02:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:04.337 12:02:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:04.337 12:02:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.337 12:02:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.337 12:02:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.337 ************************************ 00:04:04.337 START TEST env_pci 00:04:04.337 ************************************ 00:04:04.337 12:02:05 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:04.337 00:04:04.337 00:04:04.337 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.337 http://cunit.sourceforge.net/ 00:04:04.337 00:04:04.337 00:04:04.337 Suite: pci 00:04:04.337 Test: pci_hook ...[2024-11-25 12:02:05.257590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57043 has claimed it 00:04:04.337 passed 00:04:04.337 00:04:04.337 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.337 suites 1 1 n/a 0 0 00:04:04.337 tests 1 1 1 0 0 00:04:04.337 asserts 25 25 25 0 n/a 00:04:04.337 00:04:04.337 Elapsed time = 0.004 seconds 00:04:04.337 EAL: Cannot find device (10000:00:01.0) 00:04:04.337 EAL: Failed to attach device on primary process 00:04:04.338 ************************************ 00:04:04.338 END TEST env_pci 00:04:04.338 ************************************ 00:04:04.338 00:04:04.338 real 0m0.057s 00:04:04.338 user 0m0.023s 00:04:04.338 sys 0m0.032s 00:04:04.338 12:02:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.338 12:02:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:04.338 12:02:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:04.338 12:02:05 env -- env/env.sh@15 -- # uname 00:04:04.338 12:02:05 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:04.338 12:02:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:04.338 12:02:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.338 12:02:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:04.338 12:02:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.338 12:02:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.338 ************************************ 00:04:04.338 START TEST env_dpdk_post_init 00:04:04.338 ************************************ 00:04:04.338 12:02:05 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.338 EAL: Detected CPU lcores: 10 00:04:04.338 EAL: Detected NUMA nodes: 1 00:04:04.338 EAL: Detected shared linkage of DPDK 00:04:04.338 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.338 EAL: Selected IOVA mode 'PA' 00:04:04.596 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.596 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:04.596 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:04.596 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:04.596 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:04.596 Starting DPDK initialization... 00:04:04.596 Starting SPDK post initialization... 00:04:04.596 SPDK NVMe probe 00:04:04.596 Attaching to 0000:00:10.0 00:04:04.596 Attaching to 0000:00:11.0 00:04:04.596 Attaching to 0000:00:12.0 00:04:04.596 Attaching to 0000:00:13.0 00:04:04.596 Attached to 0000:00:10.0 00:04:04.596 Attached to 0000:00:11.0 00:04:04.596 Attached to 0000:00:13.0 00:04:04.596 Attached to 0000:00:12.0 00:04:04.596 Cleaning up... 
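[Annotation] The probe/attach sequence above is what env_dpdk_post_init drives: initialize the env (DPDK) and then scan the local PCIe bus for NVMe controllers. A minimal sketch of that flow using the public spdk/env.h and spdk/nvme.h API; the app name is made up, and all error handling beyond the return codes is omitted:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "post_init_example"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL trid scans the local PCIe bus, producing attach lines
	 * like the "Attaching to 0000:00:10.0" output above. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}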
00:04:04.596 00:04:04.596 real 0m0.240s 00:04:04.596 user 0m0.069s 00:04:04.596 sys 0m0.074s 00:04:04.596 ************************************ 00:04:04.596 END TEST env_dpdk_post_init 00:04:04.596 ************************************ 00:04:04.596 12:02:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.596 12:02:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.596 12:02:05 env -- env/env.sh@26 -- # uname 00:04:04.596 12:02:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:04.596 12:02:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.596 12:02:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.596 12:02:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.596 12:02:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.596 ************************************ 00:04:04.596 START TEST env_mem_callbacks 00:04:04.596 ************************************ 00:04:04.596 12:02:05 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.854 EAL: Detected CPU lcores: 10 00:04:04.854 EAL: Detected NUMA nodes: 1 00:04:04.854 EAL: Detected shared linkage of DPDK 00:04:04.854 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.854 EAL: Selected IOVA mode 'PA' 00:04:04.854 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.854 00:04:04.854 00:04:04.854 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.854 http://cunit.sourceforge.net/ 00:04:04.854 00:04:04.854 00:04:04.854 Suite: memory 00:04:04.854 Test: test ... 00:04:04.854 register 0x200000200000 2097152 00:04:04.854 malloc 3145728 00:04:04.854 register 0x200000400000 4194304 00:04:04.854 buf 0x2000004fffc0 len 3145728 PASSED 00:04:04.854 malloc 64 00:04:04.854 buf 0x2000004ffec0 len 64 PASSED 00:04:04.854 malloc 4194304 00:04:04.854 register 0x200000800000 6291456 00:04:04.854 buf 0x2000009fffc0 len 4194304 PASSED 00:04:04.854 free 0x2000004fffc0 3145728 00:04:04.854 free 0x2000004ffec0 64 00:04:04.854 unregister 0x200000400000 4194304 PASSED 00:04:04.854 free 0x2000009fffc0 4194304 00:04:04.854 unregister 0x200000800000 6291456 PASSED 00:04:04.854 malloc 8388608 00:04:04.854 register 0x200000400000 10485760 00:04:04.854 buf 0x2000005fffc0 len 8388608 PASSED 00:04:04.854 free 0x2000005fffc0 8388608 00:04:04.854 unregister 0x200000400000 10485760 PASSED 00:04:04.854 passed 00:04:04.854 00:04:04.854 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.854 suites 1 1 n/a 0 0 00:04:04.854 tests 1 1 1 0 0 00:04:04.854 asserts 15 15 15 0 n/a 00:04:04.854 00:04:04.854 Elapsed time = 0.047 seconds 00:04:04.854 00:04:04.854 real 0m0.218s 00:04:04.854 user 0m0.065s 00:04:04.854 sys 0m0.050s 00:04:04.854 12:02:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.854 12:02:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:04.854 ************************************ 00:04:04.854 END TEST env_mem_callbacks 00:04:04.854 ************************************ 00:04:04.854 00:04:04.854 real 0m7.452s 00:04:04.854 user 0m5.444s 00:04:04.854 sys 0m1.567s 00:04:04.854 12:02:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.854 12:02:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.854 ************************************ 00:04:04.854 END TEST env 00:04:04.854 
************************************ 00:04:05.113 12:02:05 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.113 12:02:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.113 12:02:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.113 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:04:05.113 ************************************ 00:04:05.113 START TEST rpc 00:04:05.113 ************************************ 00:04:05.113 12:02:05 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.113 * Looking for test storage... 00:04:05.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:05.113 12:02:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.113 12:02:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.113 12:02:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.113 12:02:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.113 12:02:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.113 12:02:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.113 12:02:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.113 12:02:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.113 12:02:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.113 12:02:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.113 12:02:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.113 12:02:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.113 12:02:06 rpc -- scripts/common.sh@345 -- # : 1 00:04:05.113 12:02:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.113 12:02:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.113 12:02:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.113 12:02:06 rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.113 12:02:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.113 12:02:06 rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.113 12:02:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.113 12:02:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.113 12:02:06 rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.113 12:02:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.113 12:02:06 rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.113 12:02:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.113 12:02:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.113 12:02:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.113 12:02:06 rpc -- scripts/common.sh@368 -- # return 0 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:05.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.113 --rc genhtml_branch_coverage=1 00:04:05.113 --rc genhtml_function_coverage=1 00:04:05.113 --rc genhtml_legend=1 00:04:05.113 --rc geninfo_all_blocks=1 00:04:05.113 --rc geninfo_unexecuted_blocks=1 00:04:05.113 00:04:05.113 ' 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:05.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.113 --rc genhtml_branch_coverage=1 00:04:05.113 --rc genhtml_function_coverage=1 00:04:05.113 --rc genhtml_legend=1 00:04:05.113 --rc geninfo_all_blocks=1 00:04:05.113 --rc geninfo_unexecuted_blocks=1 00:04:05.113 00:04:05.113 ' 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:05.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.113 --rc genhtml_branch_coverage=1 00:04:05.113 --rc genhtml_function_coverage=1 00:04:05.113 --rc genhtml_legend=1 00:04:05.113 --rc geninfo_all_blocks=1 00:04:05.113 --rc geninfo_unexecuted_blocks=1 00:04:05.113 00:04:05.113 ' 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:05.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.113 --rc genhtml_branch_coverage=1 00:04:05.113 --rc genhtml_function_coverage=1 00:04:05.113 --rc genhtml_legend=1 00:04:05.113 --rc geninfo_all_blocks=1 00:04:05.113 --rc geninfo_unexecuted_blocks=1 00:04:05.113 00:04:05.113 ' 00:04:05.113 12:02:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57170 00:04:05.113 12:02:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.113 12:02:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57170 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 57170 ']' 00:04:05.113 12:02:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
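[Annotation] The tests below talk to spdk_tgt over /var/tmp/spdk.sock via JSON-RPC. For orientation, a method is registered server-side roughly as follows; a hedged sketch with a hypothetical method name, not the source of the rpc_plugin exercised later:

#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"

static void
rpc_hello_world(struct spdk_jsonrpc_request *request,
		const struct spdk_json_val *params)
{
	struct spdk_json_write_ctx *w;

	if (params != NULL) {
		spdk_jsonrpc_send_error_response(request,
						 SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "hello_world takes no parameters");
		return;
	}

	w = spdk_jsonrpc_begin_result(request);
	spdk_json_write_string(w, "hello");
	spdk_jsonrpc_end_result(request, w);
}
/* "hello_world" is a hypothetical method name for illustration. */
SPDK_RPC_REGISTER("hello_world", rpc_hello_world, SPDK_RPC_RUNTIME)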
00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.113 12:02:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.113 [2024-11-25 12:02:06.163996] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:04:05.113 [2024-11-25 12:02:06.164257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57170 ] 00:04:05.372 [2024-11-25 12:02:06.322520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.372 [2024-11-25 12:02:06.416065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:05.372 [2024-11-25 12:02:06.416110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57170' to capture a snapshot of events at runtime. 00:04:05.372 [2024-11-25 12:02:06.416120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:05.372 [2024-11-25 12:02:06.416129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:05.372 [2024-11-25 12:02:06.416136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57170 for offline analysis/debug. 00:04:05.372 [2024-11-25 12:02:06.416973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.938 12:02:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.938 12:02:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:05.938 12:02:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.938 12:02:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.938 12:02:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:05.938 12:02:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:05.938 12:02:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.938 12:02:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.938 12:02:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.938 ************************************ 00:04:05.938 START TEST rpc_integrity 00:04:05.938 ************************************ 00:04:05.938 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:05.938 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.938 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.938 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.938 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.938 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.938 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.196 12:02:07 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.196 { 00:04:06.196 "name": "Malloc0", 00:04:06.196 "aliases": [ 00:04:06.196 "7b74fc11-576c-4b68-b1a0-906a27f49f91" 00:04:06.196 ], 00:04:06.196 "product_name": "Malloc disk", 00:04:06.196 "block_size": 512, 00:04:06.196 "num_blocks": 16384, 00:04:06.196 "uuid": "7b74fc11-576c-4b68-b1a0-906a27f49f91", 00:04:06.196 "assigned_rate_limits": { 00:04:06.196 "rw_ios_per_sec": 0, 00:04:06.196 "rw_mbytes_per_sec": 0, 00:04:06.196 "r_mbytes_per_sec": 0, 00:04:06.196 "w_mbytes_per_sec": 0 00:04:06.196 }, 00:04:06.196 "claimed": false, 00:04:06.196 "zoned": false, 00:04:06.196 "supported_io_types": { 00:04:06.196 "read": true, 00:04:06.196 "write": true, 00:04:06.196 "unmap": true, 00:04:06.196 "flush": true, 00:04:06.196 "reset": true, 00:04:06.196 "nvme_admin": false, 00:04:06.196 "nvme_io": false, 00:04:06.196 "nvme_io_md": false, 00:04:06.196 "write_zeroes": true, 00:04:06.196 "zcopy": true, 00:04:06.196 "get_zone_info": false, 00:04:06.196 "zone_management": false, 00:04:06.196 "zone_append": false, 00:04:06.196 "compare": false, 00:04:06.196 "compare_and_write": false, 00:04:06.196 "abort": true, 00:04:06.196 "seek_hole": false, 00:04:06.196 "seek_data": false, 00:04:06.196 "copy": true, 00:04:06.196 "nvme_iov_md": false 00:04:06.196 }, 00:04:06.196 "memory_domains": [ 00:04:06.196 { 00:04:06.196 "dma_device_id": "system", 00:04:06.196 "dma_device_type": 1 00:04:06.196 }, 00:04:06.196 { 00:04:06.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.196 "dma_device_type": 2 00:04:06.196 } 00:04:06.196 ], 00:04:06.196 "driver_specific": {} 00:04:06.196 } 00:04:06.196 ]' 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.196 [2024-11-25 12:02:07.112277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.196 [2024-11-25 12:02:07.112432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.196 [2024-11-25 12:02:07.112462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:06.196 [2024-11-25 12:02:07.112475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.196 [2024-11-25 12:02:07.114632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.196 [2024-11-25 12:02:07.114674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.196 Passthru0 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.196 
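[Annotation] At this point the test has stacked Passthru0 on Malloc0 via bdev_passthru_create. From inside a running SPDK application the resulting bdev can be opened and inspected with the spdk/bdev.h API; a sketch under the assumption that it runs on an SPDK app thread, with error handling trimmed:

#include "spdk/stdinc.h"
#include "spdk/bdev.h"

static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
	/* handle SPDK_BDEV_EVENT_REMOVE etc. */
}

void
open_passthru_sketch(void)
{
	struct spdk_bdev_desc *desc = NULL;

	if (spdk_bdev_open_ext("Passthru0", true, bdev_event_cb, NULL, &desc) != 0) {
		return;
	}

	struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(desc);
	/* Matches the JSON dumped by bdev_get_bdevs: 512-byte blocks, 16384 blocks. */
	printf("%s: %u-byte blocks, %ju blocks\n", spdk_bdev_get_name(bdev),
	       spdk_bdev_get_block_size(bdev),
	       (uintmax_t)spdk_bdev_get_num_blocks(bdev));

	spdk_bdev_close(desc);
}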
12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.196 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.196 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.196 { 00:04:06.196 "name": "Malloc0", 00:04:06.196 "aliases": [ 00:04:06.196 "7b74fc11-576c-4b68-b1a0-906a27f49f91" 00:04:06.196 ], 00:04:06.196 "product_name": "Malloc disk", 00:04:06.196 "block_size": 512, 00:04:06.196 "num_blocks": 16384, 00:04:06.196 "uuid": "7b74fc11-576c-4b68-b1a0-906a27f49f91", 00:04:06.196 "assigned_rate_limits": { 00:04:06.196 "rw_ios_per_sec": 0, 00:04:06.196 "rw_mbytes_per_sec": 0, 00:04:06.196 "r_mbytes_per_sec": 0, 00:04:06.196 "w_mbytes_per_sec": 0 00:04:06.196 }, 00:04:06.196 "claimed": true, 00:04:06.196 "claim_type": "exclusive_write", 00:04:06.196 "zoned": false, 00:04:06.196 "supported_io_types": { 00:04:06.196 "read": true, 00:04:06.196 "write": true, 00:04:06.196 "unmap": true, 00:04:06.196 "flush": true, 00:04:06.196 "reset": true, 00:04:06.196 "nvme_admin": false, 00:04:06.196 "nvme_io": false, 00:04:06.196 "nvme_io_md": false, 00:04:06.196 "write_zeroes": true, 00:04:06.196 "zcopy": true, 00:04:06.196 "get_zone_info": false, 00:04:06.196 "zone_management": false, 00:04:06.196 "zone_append": false, 00:04:06.196 "compare": false, 00:04:06.196 "compare_and_write": false, 00:04:06.196 "abort": true, 00:04:06.197 "seek_hole": false, 00:04:06.197 "seek_data": false, 00:04:06.197 "copy": true, 00:04:06.197 "nvme_iov_md": false 00:04:06.197 }, 00:04:06.197 "memory_domains": [ 00:04:06.197 { 00:04:06.197 "dma_device_id": "system", 00:04:06.197 "dma_device_type": 1 00:04:06.197 }, 00:04:06.197 { 00:04:06.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.197 "dma_device_type": 2 00:04:06.197 } 00:04:06.197 ], 00:04:06.197 "driver_specific": {} 00:04:06.197 }, 00:04:06.197 { 00:04:06.197 "name": "Passthru0", 00:04:06.197 "aliases": [ 00:04:06.197 "df2789be-35df-584d-b6d8-df9c8a527571" 00:04:06.197 ], 00:04:06.197 "product_name": "passthru", 00:04:06.197 "block_size": 512, 00:04:06.197 "num_blocks": 16384, 00:04:06.197 "uuid": "df2789be-35df-584d-b6d8-df9c8a527571", 00:04:06.197 "assigned_rate_limits": { 00:04:06.197 "rw_ios_per_sec": 0, 00:04:06.197 "rw_mbytes_per_sec": 0, 00:04:06.197 "r_mbytes_per_sec": 0, 00:04:06.197 "w_mbytes_per_sec": 0 00:04:06.197 }, 00:04:06.197 "claimed": false, 00:04:06.197 "zoned": false, 00:04:06.197 "supported_io_types": { 00:04:06.197 "read": true, 00:04:06.197 "write": true, 00:04:06.197 "unmap": true, 00:04:06.197 "flush": true, 00:04:06.197 "reset": true, 00:04:06.197 "nvme_admin": false, 00:04:06.197 "nvme_io": false, 00:04:06.197 "nvme_io_md": false, 00:04:06.197 "write_zeroes": true, 00:04:06.197 "zcopy": true, 00:04:06.197 "get_zone_info": false, 00:04:06.197 "zone_management": false, 00:04:06.197 "zone_append": false, 00:04:06.197 "compare": false, 00:04:06.197 "compare_and_write": false, 00:04:06.197 "abort": true, 00:04:06.197 "seek_hole": false, 00:04:06.197 "seek_data": false, 00:04:06.197 "copy": true, 00:04:06.197 "nvme_iov_md": false 00:04:06.197 }, 00:04:06.197 "memory_domains": [ 00:04:06.197 { 00:04:06.197 "dma_device_id": "system", 00:04:06.197 "dma_device_type": 1 00:04:06.197 }, 00:04:06.197 { 00:04:06.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.197 "dma_device_type": 2 
00:04:06.197 } 00:04:06.197 ], 00:04:06.197 "driver_specific": { 00:04:06.197 "passthru": { 00:04:06.197 "name": "Passthru0", 00:04:06.197 "base_bdev_name": "Malloc0" 00:04:06.197 } 00:04:06.197 } 00:04:06.197 } 00:04:06.197 ]' 00:04:06.197 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.197 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.197 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.197 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.197 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.197 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.197 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.197 ************************************ 00:04:06.197 END TEST rpc_integrity 00:04:06.197 ************************************ 00:04:06.197 12:02:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.197 00:04:06.197 real 0m0.236s 00:04:06.197 user 0m0.126s 00:04:06.197 sys 0m0.031s 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.197 12:02:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 12:02:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:06.456 12:02:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.456 12:02:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.456 12:02:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 ************************************ 00:04:06.456 START TEST rpc_plugins 00:04:06.456 ************************************ 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:06.456 { 00:04:06.456 "name": "Malloc1", 00:04:06.456 "aliases": 
[ 00:04:06.456 "d542b96b-e62e-4736-b20b-104856f0e1ea" 00:04:06.456 ], 00:04:06.456 "product_name": "Malloc disk", 00:04:06.456 "block_size": 4096, 00:04:06.456 "num_blocks": 256, 00:04:06.456 "uuid": "d542b96b-e62e-4736-b20b-104856f0e1ea", 00:04:06.456 "assigned_rate_limits": { 00:04:06.456 "rw_ios_per_sec": 0, 00:04:06.456 "rw_mbytes_per_sec": 0, 00:04:06.456 "r_mbytes_per_sec": 0, 00:04:06.456 "w_mbytes_per_sec": 0 00:04:06.456 }, 00:04:06.456 "claimed": false, 00:04:06.456 "zoned": false, 00:04:06.456 "supported_io_types": { 00:04:06.456 "read": true, 00:04:06.456 "write": true, 00:04:06.456 "unmap": true, 00:04:06.456 "flush": true, 00:04:06.456 "reset": true, 00:04:06.456 "nvme_admin": false, 00:04:06.456 "nvme_io": false, 00:04:06.456 "nvme_io_md": false, 00:04:06.456 "write_zeroes": true, 00:04:06.456 "zcopy": true, 00:04:06.456 "get_zone_info": false, 00:04:06.456 "zone_management": false, 00:04:06.456 "zone_append": false, 00:04:06.456 "compare": false, 00:04:06.456 "compare_and_write": false, 00:04:06.456 "abort": true, 00:04:06.456 "seek_hole": false, 00:04:06.456 "seek_data": false, 00:04:06.456 "copy": true, 00:04:06.456 "nvme_iov_md": false 00:04:06.456 }, 00:04:06.456 "memory_domains": [ 00:04:06.456 { 00:04:06.456 "dma_device_id": "system", 00:04:06.456 "dma_device_type": 1 00:04:06.456 }, 00:04:06.456 { 00:04:06.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.456 "dma_device_type": 2 00:04:06.456 } 00:04:06.456 ], 00:04:06.456 "driver_specific": {} 00:04:06.456 } 00:04:06.456 ]' 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:06.456 ************************************ 00:04:06.456 END TEST rpc_plugins 00:04:06.456 ************************************ 00:04:06.456 12:02:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:06.456 00:04:06.456 real 0m0.110s 00:04:06.456 user 0m0.054s 00:04:06.456 sys 0m0.020s 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.456 12:02:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 12:02:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:06.456 12:02:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.456 12:02:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.456 12:02:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 ************************************ 00:04:06.456 START TEST rpc_trace_cmd_test 00:04:06.456 ************************************ 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:06.456 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57170", 00:04:06.456 "tpoint_group_mask": "0x8", 00:04:06.456 "iscsi_conn": { 00:04:06.456 "mask": "0x2", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "scsi": { 00:04:06.456 "mask": "0x4", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "bdev": { 00:04:06.456 "mask": "0x8", 00:04:06.456 "tpoint_mask": "0xffffffffffffffff" 00:04:06.456 }, 00:04:06.456 "nvmf_rdma": { 00:04:06.456 "mask": "0x10", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "nvmf_tcp": { 00:04:06.456 "mask": "0x20", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "ftl": { 00:04:06.456 "mask": "0x40", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "blobfs": { 00:04:06.456 "mask": "0x80", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "dsa": { 00:04:06.456 "mask": "0x200", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "thread": { 00:04:06.456 "mask": "0x400", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "nvme_pcie": { 00:04:06.456 "mask": "0x800", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "iaa": { 00:04:06.456 "mask": "0x1000", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "nvme_tcp": { 00:04:06.456 "mask": "0x2000", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "bdev_nvme": { 00:04:06.456 "mask": "0x4000", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "sock": { 00:04:06.456 "mask": "0x8000", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "blob": { 00:04:06.456 "mask": "0x10000", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "bdev_raid": { 00:04:06.456 "mask": "0x20000", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 }, 00:04:06.456 "scheduler": { 00:04:06.456 "mask": "0x40000", 00:04:06.456 "tpoint_mask": "0x0" 00:04:06.456 } 00:04:06.456 }' 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:06.456 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:06.716 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:06.716 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:06.716 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:06.716 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:06.716 ************************************ 00:04:06.716 END TEST rpc_trace_cmd_test 00:04:06.716 ************************************ 00:04:06.716 12:02:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:06.716 00:04:06.716 real 0m0.177s 
00:04:06.716 user 0m0.143s 00:04:06.716 sys 0m0.024s 00:04:06.716 12:02:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.716 12:02:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.716 12:02:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:06.716 12:02:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:06.716 12:02:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:06.716 12:02:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.716 12:02:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.716 12:02:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.716 ************************************ 00:04:06.716 START TEST rpc_daemon_integrity 00:04:06.716 ************************************ 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.716 { 00:04:06.716 "name": "Malloc2", 00:04:06.716 "aliases": [ 00:04:06.716 "4c19df14-db80-4b9b-84a0-8b3da30039ed" 00:04:06.716 ], 00:04:06.716 "product_name": "Malloc disk", 00:04:06.716 "block_size": 512, 00:04:06.716 "num_blocks": 16384, 00:04:06.716 "uuid": "4c19df14-db80-4b9b-84a0-8b3da30039ed", 00:04:06.716 "assigned_rate_limits": { 00:04:06.716 "rw_ios_per_sec": 0, 00:04:06.716 "rw_mbytes_per_sec": 0, 00:04:06.716 "r_mbytes_per_sec": 0, 00:04:06.716 "w_mbytes_per_sec": 0 00:04:06.716 }, 00:04:06.716 "claimed": false, 00:04:06.716 "zoned": false, 00:04:06.716 "supported_io_types": { 00:04:06.716 "read": true, 00:04:06.716 "write": true, 00:04:06.716 "unmap": true, 00:04:06.716 "flush": true, 00:04:06.716 "reset": true, 00:04:06.716 "nvme_admin": false, 00:04:06.716 "nvme_io": false, 00:04:06.716 "nvme_io_md": false, 00:04:06.716 "write_zeroes": true, 00:04:06.716 "zcopy": true, 00:04:06.716 "get_zone_info": false, 00:04:06.716 "zone_management": false, 00:04:06.716 "zone_append": false, 00:04:06.716 "compare": false, 00:04:06.716 
"compare_and_write": false, 00:04:06.716 "abort": true, 00:04:06.716 "seek_hole": false, 00:04:06.716 "seek_data": false, 00:04:06.716 "copy": true, 00:04:06.716 "nvme_iov_md": false 00:04:06.716 }, 00:04:06.716 "memory_domains": [ 00:04:06.716 { 00:04:06.716 "dma_device_id": "system", 00:04:06.716 "dma_device_type": 1 00:04:06.716 }, 00:04:06.716 { 00:04:06.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.716 "dma_device_type": 2 00:04:06.716 } 00:04:06.716 ], 00:04:06.716 "driver_specific": {} 00:04:06.716 } 00:04:06.716 ]' 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.716 [2024-11-25 12:02:07.779448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:06.716 [2024-11-25 12:02:07.779591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.716 [2024-11-25 12:02:07.779618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:06.716 [2024-11-25 12:02:07.779630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.716 [2024-11-25 12:02:07.781766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.716 [2024-11-25 12:02:07.781801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.716 Passthru0 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.716 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.029 { 00:04:07.029 "name": "Malloc2", 00:04:07.029 "aliases": [ 00:04:07.029 "4c19df14-db80-4b9b-84a0-8b3da30039ed" 00:04:07.029 ], 00:04:07.029 "product_name": "Malloc disk", 00:04:07.029 "block_size": 512, 00:04:07.029 "num_blocks": 16384, 00:04:07.029 "uuid": "4c19df14-db80-4b9b-84a0-8b3da30039ed", 00:04:07.029 "assigned_rate_limits": { 00:04:07.029 "rw_ios_per_sec": 0, 00:04:07.029 "rw_mbytes_per_sec": 0, 00:04:07.029 "r_mbytes_per_sec": 0, 00:04:07.029 "w_mbytes_per_sec": 0 00:04:07.029 }, 00:04:07.029 "claimed": true, 00:04:07.029 "claim_type": "exclusive_write", 00:04:07.029 "zoned": false, 00:04:07.029 "supported_io_types": { 00:04:07.029 "read": true, 00:04:07.029 "write": true, 00:04:07.029 "unmap": true, 00:04:07.029 "flush": true, 00:04:07.029 "reset": true, 00:04:07.029 "nvme_admin": false, 00:04:07.029 "nvme_io": false, 00:04:07.029 "nvme_io_md": false, 00:04:07.029 "write_zeroes": true, 00:04:07.029 "zcopy": true, 00:04:07.029 "get_zone_info": false, 00:04:07.029 "zone_management": false, 00:04:07.029 "zone_append": false, 00:04:07.029 "compare": false, 00:04:07.029 "compare_and_write": false, 00:04:07.029 "abort": true, 00:04:07.029 "seek_hole": false, 00:04:07.029 "seek_data": false, 
00:04:07.029 "copy": true, 00:04:07.029 "nvme_iov_md": false 00:04:07.029 }, 00:04:07.029 "memory_domains": [ 00:04:07.029 { 00:04:07.029 "dma_device_id": "system", 00:04:07.029 "dma_device_type": 1 00:04:07.029 }, 00:04:07.029 { 00:04:07.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.029 "dma_device_type": 2 00:04:07.029 } 00:04:07.029 ], 00:04:07.029 "driver_specific": {} 00:04:07.029 }, 00:04:07.029 { 00:04:07.029 "name": "Passthru0", 00:04:07.029 "aliases": [ 00:04:07.029 "15e18e72-9ece-5200-a022-b2f83028f59c" 00:04:07.029 ], 00:04:07.029 "product_name": "passthru", 00:04:07.029 "block_size": 512, 00:04:07.029 "num_blocks": 16384, 00:04:07.029 "uuid": "15e18e72-9ece-5200-a022-b2f83028f59c", 00:04:07.029 "assigned_rate_limits": { 00:04:07.029 "rw_ios_per_sec": 0, 00:04:07.029 "rw_mbytes_per_sec": 0, 00:04:07.029 "r_mbytes_per_sec": 0, 00:04:07.029 "w_mbytes_per_sec": 0 00:04:07.029 }, 00:04:07.029 "claimed": false, 00:04:07.029 "zoned": false, 00:04:07.029 "supported_io_types": { 00:04:07.029 "read": true, 00:04:07.029 "write": true, 00:04:07.029 "unmap": true, 00:04:07.029 "flush": true, 00:04:07.029 "reset": true, 00:04:07.029 "nvme_admin": false, 00:04:07.029 "nvme_io": false, 00:04:07.029 "nvme_io_md": false, 00:04:07.029 "write_zeroes": true, 00:04:07.029 "zcopy": true, 00:04:07.029 "get_zone_info": false, 00:04:07.029 "zone_management": false, 00:04:07.029 "zone_append": false, 00:04:07.029 "compare": false, 00:04:07.029 "compare_and_write": false, 00:04:07.029 "abort": true, 00:04:07.029 "seek_hole": false, 00:04:07.029 "seek_data": false, 00:04:07.029 "copy": true, 00:04:07.029 "nvme_iov_md": false 00:04:07.029 }, 00:04:07.029 "memory_domains": [ 00:04:07.029 { 00:04:07.029 "dma_device_id": "system", 00:04:07.029 "dma_device_type": 1 00:04:07.029 }, 00:04:07.029 { 00:04:07.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.029 "dma_device_type": 2 00:04:07.029 } 00:04:07.029 ], 00:04:07.029 "driver_specific": { 00:04:07.029 "passthru": { 00:04:07.029 "name": "Passthru0", 00:04:07.029 "base_bdev_name": "Malloc2" 00:04:07.029 } 00:04:07.029 } 00:04:07.029 } 00:04:07.029 ]' 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.029 ************************************ 00:04:07.029 END TEST rpc_daemon_integrity 00:04:07.029 ************************************ 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.029 00:04:07.029 real 0m0.258s 00:04:07.029 user 0m0.130s 00:04:07.029 sys 0m0.043s 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.029 12:02:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.029 12:02:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.029 12:02:07 rpc -- rpc/rpc.sh@84 -- # killprocess 57170 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 57170 ']' 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@958 -- # kill -0 57170 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57170 00:04:07.029 killing process with pid 57170 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57170' 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@973 -- # kill 57170 00:04:07.029 12:02:07 rpc -- common/autotest_common.sh@978 -- # wait 57170 00:04:08.438 00:04:08.438 real 0m3.555s 00:04:08.438 user 0m3.962s 00:04:08.438 sys 0m0.609s 00:04:08.438 12:02:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.438 ************************************ 00:04:08.438 END TEST rpc 00:04:08.438 ************************************ 00:04:08.438 12:02:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.696 12:02:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:08.696 12:02:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.696 12:02:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.696 12:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:08.696 ************************************ 00:04:08.696 START TEST skip_rpc 00:04:08.696 ************************************ 00:04:08.696 12:02:09 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:08.696 * Looking for test storage... 
00:04:08.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.696 12:02:09 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.696 12:02:09 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.696 12:02:09 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.696 12:02:09 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.696 12:02:09 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.697 12:02:09 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.697 12:02:09 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.697 12:02:09 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.697 12:02:09 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.697 12:02:09 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.697 12:02:09 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.697 12:02:09 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:08.697 12:02:09 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.697 12:02:09 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.697 --rc genhtml_branch_coverage=1 00:04:08.697 --rc genhtml_function_coverage=1 00:04:08.697 --rc genhtml_legend=1 00:04:08.697 --rc geninfo_all_blocks=1 00:04:08.697 --rc geninfo_unexecuted_blocks=1 00:04:08.697 00:04:08.697 ' 00:04:08.697 12:02:09 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.697 --rc genhtml_branch_coverage=1 00:04:08.697 --rc genhtml_function_coverage=1 00:04:08.697 --rc genhtml_legend=1 00:04:08.697 --rc geninfo_all_blocks=1 00:04:08.697 --rc geninfo_unexecuted_blocks=1 00:04:08.697 00:04:08.697 ' 00:04:08.697 12:02:09 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:08.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.697 --rc genhtml_branch_coverage=1 00:04:08.697 --rc genhtml_function_coverage=1 00:04:08.697 --rc genhtml_legend=1 00:04:08.697 --rc geninfo_all_blocks=1 00:04:08.697 --rc geninfo_unexecuted_blocks=1 00:04:08.697 00:04:08.697 ' 00:04:08.697 12:02:09 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.697 --rc genhtml_branch_coverage=1 00:04:08.697 --rc genhtml_function_coverage=1 00:04:08.697 --rc genhtml_legend=1 00:04:08.697 --rc geninfo_all_blocks=1 00:04:08.697 --rc geninfo_unexecuted_blocks=1 00:04:08.697 00:04:08.697 ' 00:04:08.697 12:02:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.697 12:02:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:08.697 12:02:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:08.697 12:02:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.697 12:02:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.697 12:02:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.697 ************************************ 00:04:08.697 START TEST skip_rpc 00:04:08.697 ************************************ 00:04:08.697 12:02:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:08.697 12:02:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57383 00:04:08.697 12:02:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.697 12:02:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:08.697 12:02:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:08.956 [2024-11-25 12:02:09.787406] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:04:08.956 [2024-11-25 12:02:09.787695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57383 ] 00:04:08.956 [2024-11-25 12:02:09.947094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.214 [2024-11-25 12:02:10.042697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57383 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57383 ']' 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57383 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57383 00:04:14.478 killing process with pid 57383 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57383' 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57383 00:04:14.478 12:02:14 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57383 00:04:15.043 ************************************ 00:04:15.043 END TEST skip_rpc 00:04:15.043 ************************************ 00:04:15.043 00:04:15.043 real 0m6.234s 00:04:15.043 user 0m5.855s 00:04:15.043 sys 0m0.273s 00:04:15.043 12:02:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.043 12:02:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:15.043 12:02:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:15.043 12:02:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.043 12:02:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.043 12:02:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.043 ************************************ 00:04:15.043 START TEST skip_rpc_with_json 00:04:15.043 ************************************ 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:15.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57476 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57476 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57476 ']' 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.043 12:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.043 [2024-11-25 12:02:16.056808] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:04:15.043 [2024-11-25 12:02:16.056913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57476 ] 00:04:15.300 [2024-11-25 12:02:16.207458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.300 [2024-11-25 12:02:16.285510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.864 [2024-11-25 12:02:16.860108] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:15.864 request: 00:04:15.864 { 00:04:15.864 "trtype": "tcp", 00:04:15.864 "method": "nvmf_get_transports", 00:04:15.864 "req_id": 1 00:04:15.864 } 00:04:15.864 Got JSON-RPC error response 00:04:15.864 response: 00:04:15.864 { 00:04:15.864 "code": -19, 00:04:15.864 "message": "No such device" 00:04:15.864 } 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.864 [2024-11-25 12:02:16.868199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.864 12:02:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.122 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.122 12:02:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.122 { 00:04:16.122 "subsystems": [ 00:04:16.122 { 00:04:16.122 "subsystem": "fsdev", 00:04:16.122 "config": [ 00:04:16.122 { 00:04:16.122 "method": "fsdev_set_opts", 00:04:16.122 "params": { 00:04:16.122 "fsdev_io_pool_size": 65535, 00:04:16.122 "fsdev_io_cache_size": 256 00:04:16.122 } 00:04:16.122 } 00:04:16.122 ] 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "subsystem": "keyring", 00:04:16.122 "config": [] 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "subsystem": "iobuf", 00:04:16.122 "config": [ 00:04:16.122 { 00:04:16.122 "method": "iobuf_set_options", 00:04:16.122 "params": { 00:04:16.122 "small_pool_count": 8192, 00:04:16.122 "large_pool_count": 1024, 00:04:16.122 "small_bufsize": 8192, 00:04:16.122 "large_bufsize": 135168, 00:04:16.122 "enable_numa": false 00:04:16.122 } 00:04:16.122 } 00:04:16.122 ] 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "subsystem": "sock", 00:04:16.122 "config": [ 00:04:16.122 { 
00:04:16.122 "method": "sock_set_default_impl", 00:04:16.122 "params": { 00:04:16.122 "impl_name": "posix" 00:04:16.122 } 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "method": "sock_impl_set_options", 00:04:16.122 "params": { 00:04:16.122 "impl_name": "ssl", 00:04:16.122 "recv_buf_size": 4096, 00:04:16.122 "send_buf_size": 4096, 00:04:16.122 "enable_recv_pipe": true, 00:04:16.122 "enable_quickack": false, 00:04:16.122 "enable_placement_id": 0, 00:04:16.122 "enable_zerocopy_send_server": true, 00:04:16.122 "enable_zerocopy_send_client": false, 00:04:16.122 "zerocopy_threshold": 0, 00:04:16.122 "tls_version": 0, 00:04:16.122 "enable_ktls": false 00:04:16.122 } 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "method": "sock_impl_set_options", 00:04:16.122 "params": { 00:04:16.122 "impl_name": "posix", 00:04:16.122 "recv_buf_size": 2097152, 00:04:16.122 "send_buf_size": 2097152, 00:04:16.122 "enable_recv_pipe": true, 00:04:16.122 "enable_quickack": false, 00:04:16.122 "enable_placement_id": 0, 00:04:16.122 "enable_zerocopy_send_server": true, 00:04:16.122 "enable_zerocopy_send_client": false, 00:04:16.122 "zerocopy_threshold": 0, 00:04:16.122 "tls_version": 0, 00:04:16.122 "enable_ktls": false 00:04:16.122 } 00:04:16.122 } 00:04:16.122 ] 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "subsystem": "vmd", 00:04:16.122 "config": [] 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "subsystem": "accel", 00:04:16.122 "config": [ 00:04:16.122 { 00:04:16.122 "method": "accel_set_options", 00:04:16.122 "params": { 00:04:16.122 "small_cache_size": 128, 00:04:16.122 "large_cache_size": 16, 00:04:16.122 "task_count": 2048, 00:04:16.122 "sequence_count": 2048, 00:04:16.122 "buf_count": 2048 00:04:16.122 } 00:04:16.122 } 00:04:16.122 ] 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "subsystem": "bdev", 00:04:16.122 "config": [ 00:04:16.122 { 00:04:16.122 "method": "bdev_set_options", 00:04:16.122 "params": { 00:04:16.122 "bdev_io_pool_size": 65535, 00:04:16.122 "bdev_io_cache_size": 256, 00:04:16.122 "bdev_auto_examine": true, 00:04:16.122 "iobuf_small_cache_size": 128, 00:04:16.122 "iobuf_large_cache_size": 16 00:04:16.122 } 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "method": "bdev_raid_set_options", 00:04:16.122 "params": { 00:04:16.122 "process_window_size_kb": 1024, 00:04:16.122 "process_max_bandwidth_mb_sec": 0 00:04:16.122 } 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "method": "bdev_iscsi_set_options", 00:04:16.122 "params": { 00:04:16.122 "timeout_sec": 30 00:04:16.122 } 00:04:16.122 }, 00:04:16.122 { 00:04:16.122 "method": "bdev_nvme_set_options", 00:04:16.122 "params": { 00:04:16.122 "action_on_timeout": "none", 00:04:16.122 "timeout_us": 0, 00:04:16.122 "timeout_admin_us": 0, 00:04:16.122 "keep_alive_timeout_ms": 10000, 00:04:16.122 "arbitration_burst": 0, 00:04:16.122 "low_priority_weight": 0, 00:04:16.122 "medium_priority_weight": 0, 00:04:16.122 "high_priority_weight": 0, 00:04:16.122 "nvme_adminq_poll_period_us": 10000, 00:04:16.122 "nvme_ioq_poll_period_us": 0, 00:04:16.122 "io_queue_requests": 0, 00:04:16.122 "delay_cmd_submit": true, 00:04:16.122 "transport_retry_count": 4, 00:04:16.122 "bdev_retry_count": 3, 00:04:16.122 "transport_ack_timeout": 0, 00:04:16.122 "ctrlr_loss_timeout_sec": 0, 00:04:16.122 "reconnect_delay_sec": 0, 00:04:16.123 "fast_io_fail_timeout_sec": 0, 00:04:16.123 "disable_auto_failback": false, 00:04:16.123 "generate_uuids": false, 00:04:16.123 "transport_tos": 0, 00:04:16.123 "nvme_error_stat": false, 00:04:16.123 "rdma_srq_size": 0, 00:04:16.123 "io_path_stat": false, 
00:04:16.123 "allow_accel_sequence": false, 00:04:16.123 "rdma_max_cq_size": 0, 00:04:16.123 "rdma_cm_event_timeout_ms": 0, 00:04:16.123 "dhchap_digests": [ 00:04:16.123 "sha256", 00:04:16.123 "sha384", 00:04:16.123 "sha512" 00:04:16.123 ], 00:04:16.123 "dhchap_dhgroups": [ 00:04:16.123 "null", 00:04:16.123 "ffdhe2048", 00:04:16.123 "ffdhe3072", 00:04:16.123 "ffdhe4096", 00:04:16.123 "ffdhe6144", 00:04:16.123 "ffdhe8192" 00:04:16.123 ] 00:04:16.123 } 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "method": "bdev_nvme_set_hotplug", 00:04:16.123 "params": { 00:04:16.123 "period_us": 100000, 00:04:16.123 "enable": false 00:04:16.123 } 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "method": "bdev_wait_for_examine" 00:04:16.123 } 00:04:16.123 ] 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "subsystem": "scsi", 00:04:16.123 "config": null 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "subsystem": "scheduler", 00:04:16.123 "config": [ 00:04:16.123 { 00:04:16.123 "method": "framework_set_scheduler", 00:04:16.123 "params": { 00:04:16.123 "name": "static" 00:04:16.123 } 00:04:16.123 } 00:04:16.123 ] 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "subsystem": "vhost_scsi", 00:04:16.123 "config": [] 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "subsystem": "vhost_blk", 00:04:16.123 "config": [] 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "subsystem": "ublk", 00:04:16.123 "config": [] 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "subsystem": "nbd", 00:04:16.123 "config": [] 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "subsystem": "nvmf", 00:04:16.123 "config": [ 00:04:16.123 { 00:04:16.123 "method": "nvmf_set_config", 00:04:16.123 "params": { 00:04:16.123 "discovery_filter": "match_any", 00:04:16.123 "admin_cmd_passthru": { 00:04:16.123 "identify_ctrlr": false 00:04:16.123 }, 00:04:16.123 "dhchap_digests": [ 00:04:16.123 "sha256", 00:04:16.123 "sha384", 00:04:16.123 "sha512" 00:04:16.123 ], 00:04:16.123 "dhchap_dhgroups": [ 00:04:16.123 "null", 00:04:16.123 "ffdhe2048", 00:04:16.123 "ffdhe3072", 00:04:16.123 "ffdhe4096", 00:04:16.123 "ffdhe6144", 00:04:16.123 "ffdhe8192" 00:04:16.123 ] 00:04:16.123 } 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "method": "nvmf_set_max_subsystems", 00:04:16.123 "params": { 00:04:16.123 "max_subsystems": 1024 00:04:16.123 } 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "method": "nvmf_set_crdt", 00:04:16.123 "params": { 00:04:16.123 "crdt1": 0, 00:04:16.123 "crdt2": 0, 00:04:16.123 "crdt3": 0 00:04:16.123 } 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "method": "nvmf_create_transport", 00:04:16.123 "params": { 00:04:16.123 "trtype": "TCP", 00:04:16.123 "max_queue_depth": 128, 00:04:16.123 "max_io_qpairs_per_ctrlr": 127, 00:04:16.123 "in_capsule_data_size": 4096, 00:04:16.123 "max_io_size": 131072, 00:04:16.123 "io_unit_size": 131072, 00:04:16.123 "max_aq_depth": 128, 00:04:16.123 "num_shared_buffers": 511, 00:04:16.123 "buf_cache_size": 4294967295, 00:04:16.123 "dif_insert_or_strip": false, 00:04:16.123 "zcopy": false, 00:04:16.123 "c2h_success": true, 00:04:16.123 "sock_priority": 0, 00:04:16.123 "abort_timeout_sec": 1, 00:04:16.123 "ack_timeout": 0, 00:04:16.123 "data_wr_pool_size": 0 00:04:16.123 } 00:04:16.123 } 00:04:16.123 ] 00:04:16.123 }, 00:04:16.123 { 00:04:16.123 "subsystem": "iscsi", 00:04:16.123 "config": [ 00:04:16.123 { 00:04:16.123 "method": "iscsi_set_options", 00:04:16.123 "params": { 00:04:16.123 "node_base": "iqn.2016-06.io.spdk", 00:04:16.123 "max_sessions": 128, 00:04:16.123 "max_connections_per_session": 2, 00:04:16.123 "max_queue_depth": 64, 00:04:16.123 
"default_time2wait": 2, 00:04:16.123 "default_time2retain": 20, 00:04:16.123 "first_burst_length": 8192, 00:04:16.123 "immediate_data": true, 00:04:16.123 "allow_duplicated_isid": false, 00:04:16.123 "error_recovery_level": 0, 00:04:16.123 "nop_timeout": 60, 00:04:16.123 "nop_in_interval": 30, 00:04:16.123 "disable_chap": false, 00:04:16.123 "require_chap": false, 00:04:16.123 "mutual_chap": false, 00:04:16.123 "chap_group": 0, 00:04:16.123 "max_large_datain_per_connection": 64, 00:04:16.123 "max_r2t_per_connection": 4, 00:04:16.123 "pdu_pool_size": 36864, 00:04:16.123 "immediate_data_pool_size": 16384, 00:04:16.123 "data_out_pool_size": 2048 00:04:16.123 } 00:04:16.123 } 00:04:16.123 ] 00:04:16.123 } 00:04:16.123 ] 00:04:16.123 } 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57476 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57476 ']' 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57476 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57476 00:04:16.123 killing process with pid 57476 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57476' 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57476 00:04:16.123 12:02:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57476 00:04:17.535 12:02:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.535 12:02:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57515 00:04:17.535 12:02:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57515 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57515 ']' 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57515 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57515 00:04:22.807 killing process with pid 57515 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57515' 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57515 00:04:22.807 12:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57515 00:04:23.744 12:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.744 12:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:24.001 ************************************ 00:04:24.001 END TEST skip_rpc_with_json 00:04:24.001 ************************************ 00:04:24.001 00:04:24.001 real 0m8.833s 00:04:24.001 user 0m8.437s 00:04:24.001 sys 0m0.580s 00:04:24.001 12:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.001 12:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.001 12:02:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:24.001 12:02:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.001 12:02:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.001 12:02:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.001 ************************************ 00:04:24.001 START TEST skip_rpc_with_delay 00:04:24.001 ************************************ 00:04:24.001 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:24.001 12:02:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.001 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:24.001 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.001 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.001 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.002 [2024-11-25 12:02:24.945184] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:24.002 00:04:24.002 real 0m0.130s 00:04:24.002 user 0m0.061s 00:04:24.002 sys 0m0.067s 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.002 ************************************ 00:04:24.002 END TEST skip_rpc_with_delay 00:04:24.002 ************************************ 00:04:24.002 12:02:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:24.002 12:02:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:24.002 12:02:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:24.002 12:02:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:24.002 12:02:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.002 12:02:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.002 12:02:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.002 ************************************ 00:04:24.002 START TEST exit_on_failed_rpc_init 00:04:24.002 ************************************ 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:24.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57638 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57638 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57638 ']' 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.002 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.259 [2024-11-25 12:02:25.097171] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:04:24.259 [2024-11-25 12:02:25.097453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57638 ] 00:04:24.259 [2024-11-25 12:02:25.252052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.516 [2024-11-25 12:02:25.351381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:25.082 12:02:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.082 [2024-11-25 12:02:26.010766] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:04:25.082 [2024-11-25 12:02:26.010880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57656 ] 00:04:25.342 [2024-11-25 12:02:26.172777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.342 [2024-11-25 12:02:26.273262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.342 [2024-11-25 12:02:26.273354] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:25.342 [2024-11-25 12:02:26.273367] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:25.342 [2024-11-25 12:02:26.273380] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:25.601 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:25.601 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:25.601 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:25.601 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:25.601 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:25.601 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:25.601 12:02:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57638 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57638 ']' 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57638 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57638 00:04:25.602 killing process with pid 57638 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57638' 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57638 00:04:25.602 12:02:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57638 00:04:26.973 00:04:26.973 real 0m2.794s 00:04:26.973 user 0m3.109s 00:04:26.973 sys 0m0.395s 00:04:26.973 12:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.973 ************************************ 00:04:26.973 END TEST exit_on_failed_rpc_init 00:04:26.973 ************************************ 00:04:26.973 12:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.973 12:02:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.973 ************************************ 00:04:26.973 END TEST skip_rpc 00:04:26.973 ************************************ 00:04:26.973 00:04:26.973 real 0m18.313s 00:04:26.973 user 0m17.600s 00:04:26.973 sys 0m1.495s 00:04:26.973 12:02:27 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.973 12:02:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.973 12:02:27 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.973 12:02:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.973 12:02:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.973 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.973 
************************************ 00:04:26.973 START TEST rpc_client 00:04:26.973 ************************************ 00:04:26.973 12:02:27 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.973 * Looking for test storage... 00:04:26.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:26.973 12:02:27 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.973 12:02:27 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.973 12:02:27 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.973 12:02:28 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.973 12:02:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:26.973 12:02:28 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.973 12:02:28 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.973 --rc genhtml_branch_coverage=1 00:04:26.973 --rc genhtml_function_coverage=1 00:04:26.973 --rc genhtml_legend=1 00:04:26.973 --rc geninfo_all_blocks=1 00:04:26.973 --rc geninfo_unexecuted_blocks=1 00:04:26.973 00:04:26.973 ' 00:04:26.973 12:02:28 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.973 --rc genhtml_branch_coverage=1 00:04:26.973 --rc genhtml_function_coverage=1 00:04:26.973 --rc genhtml_legend=1 00:04:26.973 --rc geninfo_all_blocks=1 00:04:26.973 --rc geninfo_unexecuted_blocks=1 00:04:26.973 00:04:26.973 ' 00:04:26.973 12:02:28 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.973 --rc genhtml_branch_coverage=1 00:04:26.973 --rc genhtml_function_coverage=1 00:04:26.973 --rc genhtml_legend=1 00:04:26.973 --rc geninfo_all_blocks=1 00:04:26.973 --rc geninfo_unexecuted_blocks=1 00:04:26.973 00:04:26.973 ' 00:04:26.973 12:02:28 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.973 --rc genhtml_branch_coverage=1 00:04:26.973 --rc genhtml_function_coverage=1 00:04:26.973 --rc genhtml_legend=1 00:04:26.973 --rc geninfo_all_blocks=1 00:04:26.973 --rc geninfo_unexecuted_blocks=1 00:04:26.973 00:04:26.973 ' 00:04:26.974 12:02:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:27.276 OK 00:04:27.276 12:02:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:27.276 00:04:27.276 real 0m0.182s 00:04:27.276 user 0m0.109s 00:04:27.276 sys 0m0.082s 00:04:27.276 12:02:28 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.276 12:02:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:27.276 ************************************ 00:04:27.276 END TEST rpc_client 00:04:27.276 ************************************ 00:04:27.276 12:02:28 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:27.276 12:02:28 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.276 12:02:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.276 12:02:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.276 ************************************ 00:04:27.276 START TEST json_config 00:04:27.276 ************************************ 00:04:27.276 12:02:28 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:27.276 12:02:28 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.276 12:02:28 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.276 12:02:28 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.276 12:02:28 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.277 12:02:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.277 12:02:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.277 12:02:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.277 12:02:28 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.277 12:02:28 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.277 12:02:28 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.277 12:02:28 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.277 12:02:28 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.277 12:02:28 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.277 12:02:28 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.277 12:02:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.277 12:02:28 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:27.277 12:02:28 json_config -- scripts/common.sh@345 -- # : 1 00:04:27.277 12:02:28 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.277 12:02:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.277 12:02:28 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:27.277 12:02:28 json_config -- scripts/common.sh@353 -- # local d=1 00:04:27.277 12:02:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.277 12:02:28 json_config -- scripts/common.sh@355 -- # echo 1 00:04:27.277 12:02:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.277 12:02:28 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:27.277 12:02:28 json_config -- scripts/common.sh@353 -- # local d=2 00:04:27.277 12:02:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.277 12:02:28 json_config -- scripts/common.sh@355 -- # echo 2 00:04:27.277 12:02:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.277 12:02:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.277 12:02:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.277 12:02:28 json_config -- scripts/common.sh@368 -- # return 0 00:04:27.277 12:02:28 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.277 12:02:28 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.277 --rc genhtml_branch_coverage=1 00:04:27.277 --rc genhtml_function_coverage=1 00:04:27.277 --rc genhtml_legend=1 00:04:27.277 --rc geninfo_all_blocks=1 00:04:27.277 --rc geninfo_unexecuted_blocks=1 00:04:27.277 00:04:27.277 ' 00:04:27.277 12:02:28 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.277 --rc genhtml_branch_coverage=1 00:04:27.277 --rc genhtml_function_coverage=1 00:04:27.277 --rc genhtml_legend=1 00:04:27.277 --rc geninfo_all_blocks=1 00:04:27.277 --rc geninfo_unexecuted_blocks=1 00:04:27.277 00:04:27.277 ' 00:04:27.277 12:02:28 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.277 --rc genhtml_branch_coverage=1 00:04:27.277 --rc genhtml_function_coverage=1 00:04:27.277 --rc genhtml_legend=1 00:04:27.277 --rc geninfo_all_blocks=1 00:04:27.277 --rc geninfo_unexecuted_blocks=1 00:04:27.277 00:04:27.277 ' 00:04:27.277 12:02:28 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.277 --rc genhtml_branch_coverage=1 00:04:27.277 --rc genhtml_function_coverage=1 00:04:27.277 --rc genhtml_legend=1 00:04:27.277 --rc geninfo_all_blocks=1 00:04:27.277 --rc geninfo_unexecuted_blocks=1 00:04:27.277 00:04:27.277 ' 00:04:27.277 12:02:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.277 12:02:28 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fc777c45-39d9-4fee-b620-435140e95f34 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=fc777c45-39d9-4fee-b620-435140e95f34 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.277 12:02:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.277 12:02:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.277 12:02:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.277 12:02:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.277 12:02:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.277 12:02:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.277 12:02:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.277 12:02:28 json_config -- paths/export.sh@5 -- # export PATH 00:04:27.277 12:02:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@51 -- # : 0 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.277 12:02:28 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.277 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.277 12:02:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.277 12:02:28 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:27.277 12:02:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:27.278 12:02:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:27.278 12:02:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:27.278 12:02:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:27.278 12:02:28 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:27.278 WARNING: No tests are enabled so not running JSON configuration tests 00:04:27.278 12:02:28 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:27.278 00:04:27.278 real 0m0.148s 00:04:27.278 user 0m0.096s 00:04:27.278 sys 0m0.052s 00:04:27.278 12:02:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.278 12:02:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.278 ************************************ 00:04:27.278 END TEST json_config 00:04:27.278 ************************************ 00:04:27.278 12:02:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:27.278 12:02:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.278 12:02:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.278 12:02:28 -- common/autotest_common.sh@10 -- # set +x 00:04:27.278 ************************************ 00:04:27.278 START TEST json_config_extra_key 00:04:27.278 ************************************ 00:04:27.278 12:02:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.555 12:02:28 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.555 12:02:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.555 --rc genhtml_branch_coverage=1 00:04:27.555 --rc genhtml_function_coverage=1 00:04:27.555 --rc genhtml_legend=1 00:04:27.555 --rc geninfo_all_blocks=1 00:04:27.555 --rc geninfo_unexecuted_blocks=1 00:04:27.555 00:04:27.555 ' 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.555 --rc genhtml_branch_coverage=1 00:04:27.555 --rc genhtml_function_coverage=1 00:04:27.555 --rc genhtml_legend=1 00:04:27.555 --rc geninfo_all_blocks=1 00:04:27.555 --rc geninfo_unexecuted_blocks=1 00:04:27.555 00:04:27.555 ' 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.555 --rc genhtml_branch_coverage=1 00:04:27.555 --rc genhtml_function_coverage=1 00:04:27.555 --rc genhtml_legend=1 00:04:27.555 --rc geninfo_all_blocks=1 00:04:27.555 --rc geninfo_unexecuted_blocks=1 00:04:27.555 00:04:27.555 ' 00:04:27.555 12:02:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.555 --rc genhtml_branch_coverage=1 00:04:27.555 --rc 
genhtml_function_coverage=1 00:04:27.555 --rc genhtml_legend=1 00:04:27.555 --rc geninfo_all_blocks=1 00:04:27.555 --rc geninfo_unexecuted_blocks=1 00:04:27.555 00:04:27.555 ' 00:04:27.555 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fc777c45-39d9-4fee-b620-435140e95f34 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=fc777c45-39d9-4fee-b620-435140e95f34 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.555 12:02:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.556 12:02:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.556 12:02:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.556 12:02:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.556 12:02:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.556 12:02:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.556 12:02:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.556 12:02:28 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.556 12:02:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:27.556 12:02:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.556 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.556 12:02:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.556 INFO: launching applications... 00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
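The "[: : integer expression expected" complaint traced twice above comes from nvmf/common.sh line 33, where '[' '' -eq 1 ']' runs with an empty left operand: the [ builtin's -eq demands integers on both sides, so it prints the diagnostic and returns status 2, and the surrounding test simply evaluates false. A minimal sketch of the failure and two common guards, using a stand-in variable name FLAG rather than whatever the script actually tests:

    FLAG=""
    [ "$FLAG" -eq 1 ]                   # reproduces "[: : integer expression expected"
    [ "${FLAG:-0}" -eq 1 ]              # guard: default the empty expansion to 0
    [[ -n "$FLAG" && "$FLAG" -eq 1 ]]   # guard: short-circuit past the numeric test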
00:04:27.556 12:02:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57844 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.556 Waiting for target to run... 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57844 /var/tmp/spdk_tgt.sock 00:04:27.556 12:02:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57844 ']' 00:04:27.556 12:02:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.556 12:02:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.556 12:02:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.556 12:02:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:27.556 12:02:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.556 12:02:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.556 [2024-11-25 12:02:28.559194] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:04:27.556 [2024-11-25 12:02:28.559421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57844 ] 00:04:27.815 [2024-11-25 12:02:28.879631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.072 [2024-11-25 12:02:28.954155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.340 00:04:28.340 INFO: shutting down applications... 00:04:28.340 12:02:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.340 12:02:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:28.340 12:02:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
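The launch-and-teardown pattern above and in the entries that follow: spdk_tgt is started with the extra_key.json config on an explicit RPC socket (-r /var/tmp/spdk_tgt.sock), waitforlisten polls until that socket answers, and shutdown sends SIGINT and then probes the pid until it exits. A sketch of the shutdown poll traced below, with "$pid" standing in for this run's 57844; the 30 x 0.5 s bound is taken from the traced loop:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks process existence
        sleep 0.5
    done
    echo 'SPDK target shutdown done'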
00:04:28.340 12:02:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57844 ]] 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57844 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57844 00:04:28.340 12:02:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:28.906 12:02:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.906 12:02:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.906 12:02:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57844 00:04:28.906 12:02:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.472 12:02:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.473 12:02:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.473 12:02:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57844 00:04:29.473 12:02:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.045 12:02:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.045 12:02:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.045 12:02:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57844 00:04:30.045 12:02:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.045 12:02:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:30.045 12:02:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.045 SPDK target shutdown done 00:04:30.045 Success 00:04:30.045 12:02:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.045 12:02:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:30.045 00:04:30.045 real 0m2.539s 00:04:30.045 user 0m2.260s 00:04:30.045 sys 0m0.401s 00:04:30.045 12:02:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.045 ************************************ 00:04:30.045 END TEST json_config_extra_key 00:04:30.045 ************************************ 00:04:30.045 12:02:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.045 12:02:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.045 12:02:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.045 12:02:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.045 12:02:30 -- common/autotest_common.sh@10 -- # set +x 00:04:30.045 ************************************ 00:04:30.045 START TEST alias_rpc 00:04:30.045 ************************************ 00:04:30.045 12:02:30 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.045 * Looking for test storage... 
00:04:30.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:30.045 12:02:31 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.045 12:02:31 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.045 12:02:31 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.045 12:02:31 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.045 12:02:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:30.045 12:02:31 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.045 12:02:31 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.045 --rc genhtml_branch_coverage=1 00:04:30.045 --rc genhtml_function_coverage=1 00:04:30.045 --rc genhtml_legend=1 00:04:30.045 --rc geninfo_all_blocks=1 00:04:30.045 --rc geninfo_unexecuted_blocks=1 00:04:30.045 00:04:30.045 ' 00:04:30.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
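The lt/cmp_versions walk traced above (here deciding that lcov 1.15 predates 2) splits both version strings on '.', '-' and ':' via IFS, then compares the fields numerically left to right. A paraphrase of that logic, not the verbatim scripts/common.sh source:

    lt() {    # lt A B: succeed when version A sorts before version B
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov: export the 1.x branch/function coverage options"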
00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.046 --rc genhtml_branch_coverage=1 00:04:30.046 --rc genhtml_function_coverage=1 00:04:30.046 --rc genhtml_legend=1 00:04:30.046 --rc geninfo_all_blocks=1 00:04:30.046 --rc geninfo_unexecuted_blocks=1 00:04:30.046 00:04:30.046 ' 00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.046 --rc genhtml_branch_coverage=1 00:04:30.046 --rc genhtml_function_coverage=1 00:04:30.046 --rc genhtml_legend=1 00:04:30.046 --rc geninfo_all_blocks=1 00:04:30.046 --rc geninfo_unexecuted_blocks=1 00:04:30.046 00:04:30.046 ' 00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.046 --rc genhtml_branch_coverage=1 00:04:30.046 --rc genhtml_function_coverage=1 00:04:30.046 --rc genhtml_legend=1 00:04:30.046 --rc geninfo_all_blocks=1 00:04:30.046 --rc geninfo_unexecuted_blocks=1 00:04:30.046 00:04:30.046 ' 00:04:30.046 12:02:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:30.046 12:02:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57936 00:04:30.046 12:02:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57936 00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57936 ']' 00:04:30.046 12:02:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.046 12:02:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.305 [2024-11-25 12:02:31.163524] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:04:30.305 [2024-11-25 12:02:31.163648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57936 ] 00:04:30.305 [2024-11-25 12:02:31.320987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.562 [2024-11-25 12:02:31.405253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.129 12:02:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.129 12:02:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:31.129 12:02:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:31.387 12:02:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57936 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57936 ']' 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57936 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57936 00:04:31.387 killing process with pid 57936 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57936' 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@973 -- # kill 57936 00:04:31.387 12:02:32 alias_rpc -- common/autotest_common.sh@978 -- # wait 57936 00:04:32.762 ************************************ 00:04:32.762 END TEST alias_rpc 00:04:32.762 ************************************ 00:04:32.762 00:04:32.762 real 0m2.510s 00:04:32.762 user 0m2.628s 00:04:32.762 sys 0m0.383s 00:04:32.762 12:02:33 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.762 12:02:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.762 12:02:33 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:32.762 12:02:33 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:32.762 12:02:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.762 12:02:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.762 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:04:32.762 ************************************ 00:04:32.762 START TEST spdkcli_tcp 00:04:32.762 ************************************ 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:32.762 * Looking for test storage... 
00:04:32.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.762 12:02:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.762 --rc genhtml_branch_coverage=1 00:04:32.762 --rc genhtml_function_coverage=1 00:04:32.762 --rc genhtml_legend=1 00:04:32.762 --rc geninfo_all_blocks=1 00:04:32.762 --rc geninfo_unexecuted_blocks=1 00:04:32.762 00:04:32.762 ' 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.762 --rc genhtml_branch_coverage=1 00:04:32.762 --rc genhtml_function_coverage=1 00:04:32.762 --rc genhtml_legend=1 00:04:32.762 --rc geninfo_all_blocks=1 00:04:32.762 --rc geninfo_unexecuted_blocks=1 00:04:32.762 
00:04:32.762 ' 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.762 --rc genhtml_branch_coverage=1 00:04:32.762 --rc genhtml_function_coverage=1 00:04:32.762 --rc genhtml_legend=1 00:04:32.762 --rc geninfo_all_blocks=1 00:04:32.762 --rc geninfo_unexecuted_blocks=1 00:04:32.762 00:04:32.762 ' 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.762 --rc genhtml_branch_coverage=1 00:04:32.762 --rc genhtml_function_coverage=1 00:04:32.762 --rc genhtml_legend=1 00:04:32.762 --rc geninfo_all_blocks=1 00:04:32.762 --rc geninfo_unexecuted_blocks=1 00:04:32.762 00:04:32.762 ' 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58021 00:04:32.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58021 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58021 ']' 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.762 12:02:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.762 12:02:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.762 [2024-11-25 12:02:33.742863] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
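What the spdkcli_tcp entries just below are doing: spdk_tgt only listens on the UNIX socket /var/tmp/spdk.sock, so the test bridges TCP port 9998 onto it with socat and points rpc.py at the TCP side; -r 100 -t 2 keep the client retrying while the backgrounded socat comes up. The same bridge in isolation, using this run's paths and port:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"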
00:04:32.762 [2024-11-25 12:02:33.743002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58021 ] 00:04:33.023 [2024-11-25 12:02:33.904298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.023 [2024-11-25 12:02:34.019481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.023 [2024-11-25 12:02:34.019578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.593 12:02:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.593 12:02:34 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:33.593 12:02:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58038 00:04:33.593 12:02:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:33.593 12:02:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:33.856 [ 00:04:33.856 "bdev_malloc_delete", 00:04:33.856 "bdev_malloc_create", 00:04:33.856 "bdev_null_resize", 00:04:33.856 "bdev_null_delete", 00:04:33.856 "bdev_null_create", 00:04:33.856 "bdev_nvme_cuse_unregister", 00:04:33.856 "bdev_nvme_cuse_register", 00:04:33.856 "bdev_opal_new_user", 00:04:33.856 "bdev_opal_set_lock_state", 00:04:33.856 "bdev_opal_delete", 00:04:33.856 "bdev_opal_get_info", 00:04:33.856 "bdev_opal_create", 00:04:33.856 "bdev_nvme_opal_revert", 00:04:33.856 "bdev_nvme_opal_init", 00:04:33.856 "bdev_nvme_send_cmd", 00:04:33.856 "bdev_nvme_set_keys", 00:04:33.856 "bdev_nvme_get_path_iostat", 00:04:33.856 "bdev_nvme_get_mdns_discovery_info", 00:04:33.856 "bdev_nvme_stop_mdns_discovery", 00:04:33.856 "bdev_nvme_start_mdns_discovery", 00:04:33.856 "bdev_nvme_set_multipath_policy", 00:04:33.856 "bdev_nvme_set_preferred_path", 00:04:33.856 "bdev_nvme_get_io_paths", 00:04:33.856 "bdev_nvme_remove_error_injection", 00:04:33.856 "bdev_nvme_add_error_injection", 00:04:33.856 "bdev_nvme_get_discovery_info", 00:04:33.856 "bdev_nvme_stop_discovery", 00:04:33.856 "bdev_nvme_start_discovery", 00:04:33.856 "bdev_nvme_get_controller_health_info", 00:04:33.856 "bdev_nvme_disable_controller", 00:04:33.856 "bdev_nvme_enable_controller", 00:04:33.856 "bdev_nvme_reset_controller", 00:04:33.856 "bdev_nvme_get_transport_statistics", 00:04:33.856 "bdev_nvme_apply_firmware", 00:04:33.856 "bdev_nvme_detach_controller", 00:04:33.856 "bdev_nvme_get_controllers", 00:04:33.856 "bdev_nvme_attach_controller", 00:04:33.856 "bdev_nvme_set_hotplug", 00:04:33.856 "bdev_nvme_set_options", 00:04:33.856 "bdev_passthru_delete", 00:04:33.856 "bdev_passthru_create", 00:04:33.856 "bdev_lvol_set_parent_bdev", 00:04:33.857 "bdev_lvol_set_parent", 00:04:33.857 "bdev_lvol_check_shallow_copy", 00:04:33.857 "bdev_lvol_start_shallow_copy", 00:04:33.857 "bdev_lvol_grow_lvstore", 00:04:33.857 "bdev_lvol_get_lvols", 00:04:33.857 "bdev_lvol_get_lvstores", 00:04:33.857 "bdev_lvol_delete", 00:04:33.857 "bdev_lvol_set_read_only", 00:04:33.857 "bdev_lvol_resize", 00:04:33.857 "bdev_lvol_decouple_parent", 00:04:33.857 "bdev_lvol_inflate", 00:04:33.857 "bdev_lvol_rename", 00:04:33.857 "bdev_lvol_clone_bdev", 00:04:33.857 "bdev_lvol_clone", 00:04:33.857 "bdev_lvol_snapshot", 00:04:33.857 "bdev_lvol_create", 00:04:33.857 "bdev_lvol_delete_lvstore", 00:04:33.857 "bdev_lvol_rename_lvstore", 00:04:33.857 
"bdev_lvol_create_lvstore", 00:04:33.857 "bdev_raid_set_options", 00:04:33.857 "bdev_raid_remove_base_bdev", 00:04:33.857 "bdev_raid_add_base_bdev", 00:04:33.857 "bdev_raid_delete", 00:04:33.857 "bdev_raid_create", 00:04:33.857 "bdev_raid_get_bdevs", 00:04:33.857 "bdev_error_inject_error", 00:04:33.857 "bdev_error_delete", 00:04:33.857 "bdev_error_create", 00:04:33.857 "bdev_split_delete", 00:04:33.857 "bdev_split_create", 00:04:33.857 "bdev_delay_delete", 00:04:33.857 "bdev_delay_create", 00:04:33.857 "bdev_delay_update_latency", 00:04:33.857 "bdev_zone_block_delete", 00:04:33.857 "bdev_zone_block_create", 00:04:33.857 "blobfs_create", 00:04:33.857 "blobfs_detect", 00:04:33.857 "blobfs_set_cache_size", 00:04:33.857 "bdev_xnvme_delete", 00:04:33.857 "bdev_xnvme_create", 00:04:33.857 "bdev_aio_delete", 00:04:33.857 "bdev_aio_rescan", 00:04:33.857 "bdev_aio_create", 00:04:33.857 "bdev_ftl_set_property", 00:04:33.857 "bdev_ftl_get_properties", 00:04:33.857 "bdev_ftl_get_stats", 00:04:33.857 "bdev_ftl_unmap", 00:04:33.857 "bdev_ftl_unload", 00:04:33.857 "bdev_ftl_delete", 00:04:33.857 "bdev_ftl_load", 00:04:33.857 "bdev_ftl_create", 00:04:33.857 "bdev_virtio_attach_controller", 00:04:33.857 "bdev_virtio_scsi_get_devices", 00:04:33.857 "bdev_virtio_detach_controller", 00:04:33.857 "bdev_virtio_blk_set_hotplug", 00:04:33.857 "bdev_iscsi_delete", 00:04:33.857 "bdev_iscsi_create", 00:04:33.857 "bdev_iscsi_set_options", 00:04:33.857 "accel_error_inject_error", 00:04:33.857 "ioat_scan_accel_module", 00:04:33.857 "dsa_scan_accel_module", 00:04:33.857 "iaa_scan_accel_module", 00:04:33.857 "keyring_file_remove_key", 00:04:33.857 "keyring_file_add_key", 00:04:33.857 "keyring_linux_set_options", 00:04:33.857 "fsdev_aio_delete", 00:04:33.857 "fsdev_aio_create", 00:04:33.857 "iscsi_get_histogram", 00:04:33.857 "iscsi_enable_histogram", 00:04:33.857 "iscsi_set_options", 00:04:33.857 "iscsi_get_auth_groups", 00:04:33.857 "iscsi_auth_group_remove_secret", 00:04:33.857 "iscsi_auth_group_add_secret", 00:04:33.857 "iscsi_delete_auth_group", 00:04:33.857 "iscsi_create_auth_group", 00:04:33.857 "iscsi_set_discovery_auth", 00:04:33.857 "iscsi_get_options", 00:04:33.857 "iscsi_target_node_request_logout", 00:04:33.857 "iscsi_target_node_set_redirect", 00:04:33.857 "iscsi_target_node_set_auth", 00:04:33.857 "iscsi_target_node_add_lun", 00:04:33.857 "iscsi_get_stats", 00:04:33.857 "iscsi_get_connections", 00:04:33.857 "iscsi_portal_group_set_auth", 00:04:33.857 "iscsi_start_portal_group", 00:04:33.857 "iscsi_delete_portal_group", 00:04:33.857 "iscsi_create_portal_group", 00:04:33.857 "iscsi_get_portal_groups", 00:04:33.857 "iscsi_delete_target_node", 00:04:33.857 "iscsi_target_node_remove_pg_ig_maps", 00:04:33.857 "iscsi_target_node_add_pg_ig_maps", 00:04:33.857 "iscsi_create_target_node", 00:04:33.857 "iscsi_get_target_nodes", 00:04:33.857 "iscsi_delete_initiator_group", 00:04:33.857 "iscsi_initiator_group_remove_initiators", 00:04:33.857 "iscsi_initiator_group_add_initiators", 00:04:33.857 "iscsi_create_initiator_group", 00:04:33.857 "iscsi_get_initiator_groups", 00:04:33.857 "nvmf_set_crdt", 00:04:33.857 "nvmf_set_config", 00:04:33.857 "nvmf_set_max_subsystems", 00:04:33.857 "nvmf_stop_mdns_prr", 00:04:33.857 "nvmf_publish_mdns_prr", 00:04:33.857 "nvmf_subsystem_get_listeners", 00:04:33.857 "nvmf_subsystem_get_qpairs", 00:04:33.857 "nvmf_subsystem_get_controllers", 00:04:33.857 "nvmf_get_stats", 00:04:33.857 "nvmf_get_transports", 00:04:33.857 "nvmf_create_transport", 00:04:33.857 "nvmf_get_targets", 00:04:33.857 
"nvmf_delete_target", 00:04:33.857 "nvmf_create_target", 00:04:33.857 "nvmf_subsystem_allow_any_host", 00:04:33.857 "nvmf_subsystem_set_keys", 00:04:33.857 "nvmf_subsystem_remove_host", 00:04:33.857 "nvmf_subsystem_add_host", 00:04:33.857 "nvmf_ns_remove_host", 00:04:33.857 "nvmf_ns_add_host", 00:04:33.857 "nvmf_subsystem_remove_ns", 00:04:33.857 "nvmf_subsystem_set_ns_ana_group", 00:04:33.857 "nvmf_subsystem_add_ns", 00:04:33.857 "nvmf_subsystem_listener_set_ana_state", 00:04:33.857 "nvmf_discovery_get_referrals", 00:04:33.857 "nvmf_discovery_remove_referral", 00:04:33.857 "nvmf_discovery_add_referral", 00:04:33.857 "nvmf_subsystem_remove_listener", 00:04:33.857 "nvmf_subsystem_add_listener", 00:04:33.857 "nvmf_delete_subsystem", 00:04:33.857 "nvmf_create_subsystem", 00:04:33.857 "nvmf_get_subsystems", 00:04:33.857 "env_dpdk_get_mem_stats", 00:04:33.857 "nbd_get_disks", 00:04:33.857 "nbd_stop_disk", 00:04:33.857 "nbd_start_disk", 00:04:33.857 "ublk_recover_disk", 00:04:33.857 "ublk_get_disks", 00:04:33.857 "ublk_stop_disk", 00:04:33.857 "ublk_start_disk", 00:04:33.857 "ublk_destroy_target", 00:04:33.857 "ublk_create_target", 00:04:33.857 "virtio_blk_create_transport", 00:04:33.857 "virtio_blk_get_transports", 00:04:33.857 "vhost_controller_set_coalescing", 00:04:33.857 "vhost_get_controllers", 00:04:33.857 "vhost_delete_controller", 00:04:33.857 "vhost_create_blk_controller", 00:04:33.857 "vhost_scsi_controller_remove_target", 00:04:33.857 "vhost_scsi_controller_add_target", 00:04:33.857 "vhost_start_scsi_controller", 00:04:33.857 "vhost_create_scsi_controller", 00:04:33.857 "thread_set_cpumask", 00:04:33.857 "scheduler_set_options", 00:04:33.857 "framework_get_governor", 00:04:33.857 "framework_get_scheduler", 00:04:33.857 "framework_set_scheduler", 00:04:33.857 "framework_get_reactors", 00:04:33.857 "thread_get_io_channels", 00:04:33.857 "thread_get_pollers", 00:04:33.857 "thread_get_stats", 00:04:33.857 "framework_monitor_context_switch", 00:04:33.857 "spdk_kill_instance", 00:04:33.857 "log_enable_timestamps", 00:04:33.857 "log_get_flags", 00:04:33.857 "log_clear_flag", 00:04:33.857 "log_set_flag", 00:04:33.857 "log_get_level", 00:04:33.857 "log_set_level", 00:04:33.857 "log_get_print_level", 00:04:33.857 "log_set_print_level", 00:04:33.857 "framework_enable_cpumask_locks", 00:04:33.857 "framework_disable_cpumask_locks", 00:04:33.857 "framework_wait_init", 00:04:33.857 "framework_start_init", 00:04:33.857 "scsi_get_devices", 00:04:33.857 "bdev_get_histogram", 00:04:33.857 "bdev_enable_histogram", 00:04:33.857 "bdev_set_qos_limit", 00:04:33.857 "bdev_set_qd_sampling_period", 00:04:33.857 "bdev_get_bdevs", 00:04:33.857 "bdev_reset_iostat", 00:04:33.857 "bdev_get_iostat", 00:04:33.857 "bdev_examine", 00:04:33.857 "bdev_wait_for_examine", 00:04:33.857 "bdev_set_options", 00:04:33.857 "accel_get_stats", 00:04:33.857 "accel_set_options", 00:04:33.857 "accel_set_driver", 00:04:33.857 "accel_crypto_key_destroy", 00:04:33.857 "accel_crypto_keys_get", 00:04:33.857 "accel_crypto_key_create", 00:04:33.857 "accel_assign_opc", 00:04:33.857 "accel_get_module_info", 00:04:33.857 "accel_get_opc_assignments", 00:04:33.857 "vmd_rescan", 00:04:33.857 "vmd_remove_device", 00:04:33.857 "vmd_enable", 00:04:33.857 "sock_get_default_impl", 00:04:33.857 "sock_set_default_impl", 00:04:33.857 "sock_impl_set_options", 00:04:33.857 "sock_impl_get_options", 00:04:33.857 "iobuf_get_stats", 00:04:33.857 "iobuf_set_options", 00:04:33.857 "keyring_get_keys", 00:04:33.857 "framework_get_pci_devices", 00:04:33.857 
"framework_get_config", 00:04:33.857 "framework_get_subsystems", 00:04:33.857 "fsdev_set_opts", 00:04:33.857 "fsdev_get_opts", 00:04:33.857 "trace_get_info", 00:04:33.857 "trace_get_tpoint_group_mask", 00:04:33.857 "trace_disable_tpoint_group", 00:04:33.857 "trace_enable_tpoint_group", 00:04:33.857 "trace_clear_tpoint_mask", 00:04:33.857 "trace_set_tpoint_mask", 00:04:33.857 "notify_get_notifications", 00:04:33.857 "notify_get_types", 00:04:33.857 "spdk_get_version", 00:04:33.857 "rpc_get_methods" 00:04:33.857 ] 00:04:33.857 12:02:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:33.857 12:02:34 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.857 12:02:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.857 12:02:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:33.857 12:02:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58021 00:04:33.857 12:02:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58021 ']' 00:04:33.857 12:02:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58021 00:04:33.857 12:02:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:33.857 12:02:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.857 12:02:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58021 00:04:33.858 killing process with pid 58021 00:04:33.858 12:02:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.858 12:02:34 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.858 12:02:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58021' 00:04:33.858 12:02:34 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58021 00:04:33.858 12:02:34 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58021 00:04:35.772 ************************************ 00:04:35.772 END TEST spdkcli_tcp 00:04:35.773 ************************************ 00:04:35.773 00:04:35.773 real 0m3.056s 00:04:35.773 user 0m5.388s 00:04:35.773 sys 0m0.506s 00:04:35.773 12:02:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.773 12:02:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.773 12:02:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:35.773 12:02:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.773 12:02:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.773 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:04:35.773 ************************************ 00:04:35.773 START TEST dpdk_mem_utility 00:04:35.773 ************************************ 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:35.773 * Looking for test storage... 
00:04:35.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.773 12:02:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.773 --rc genhtml_branch_coverage=1 00:04:35.773 --rc genhtml_function_coverage=1 00:04:35.773 --rc genhtml_legend=1 00:04:35.773 --rc geninfo_all_blocks=1 00:04:35.773 --rc geninfo_unexecuted_blocks=1 00:04:35.773 00:04:35.773 ' 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.773 --rc 
genhtml_branch_coverage=1 00:04:35.773 --rc genhtml_function_coverage=1 00:04:35.773 --rc genhtml_legend=1 00:04:35.773 --rc geninfo_all_blocks=1 00:04:35.773 --rc geninfo_unexecuted_blocks=1 00:04:35.773 00:04:35.773 ' 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.773 --rc genhtml_branch_coverage=1 00:04:35.773 --rc genhtml_function_coverage=1 00:04:35.773 --rc genhtml_legend=1 00:04:35.773 --rc geninfo_all_blocks=1 00:04:35.773 --rc geninfo_unexecuted_blocks=1 00:04:35.773 00:04:35.773 ' 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.773 --rc genhtml_branch_coverage=1 00:04:35.773 --rc genhtml_function_coverage=1 00:04:35.773 --rc genhtml_legend=1 00:04:35.773 --rc geninfo_all_blocks=1 00:04:35.773 --rc geninfo_unexecuted_blocks=1 00:04:35.773 00:04:35.773 ' 00:04:35.773 12:02:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:35.773 12:02:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58132 00:04:35.773 12:02:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58132 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58132 ']' 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.773 12:02:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.773 12:02:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.036 [2024-11-25 12:02:36.881338] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
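The memory dump that follows is produced in two steps, visible in the traces below: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK allocator state to /tmp/spdk_mem_dump.txt (the {"filename": ...} reply), and dpdk_mem_info.py then renders that file. Reproduced by hand against a running target, with -m 0 selecting the detailed view of heap id 0 as in this run:

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0            # per-element detail for heap id 0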
00:04:36.036 [2024-11-25 12:02:36.881471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58132 ] 00:04:36.036 [2024-11-25 12:02:37.040643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.332 [2024-11-25 12:02:37.147534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.906 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.906 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:36.906 12:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:36.906 12:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:36.906 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.906 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.906 { 00:04:36.906 "filename": "/tmp/spdk_mem_dump.txt" 00:04:36.906 } 00:04:36.906 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.906 12:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:36.906 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:36.906 1 heaps totaling size 816.000000 MiB 00:04:36.906 size: 816.000000 MiB heap id: 0 00:04:36.906 end heaps---------- 00:04:36.906 9 mempools totaling size 595.772034 MiB 00:04:36.906 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:36.906 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:36.906 size: 92.545471 MiB name: bdev_io_58132 00:04:36.906 size: 50.003479 MiB name: msgpool_58132 00:04:36.906 size: 36.509338 MiB name: fsdev_io_58132 00:04:36.906 size: 21.763794 MiB name: PDU_Pool 00:04:36.906 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:36.906 size: 4.133484 MiB name: evtpool_58132 00:04:36.906 size: 0.026123 MiB name: Session_Pool 00:04:36.906 end mempools------- 00:04:36.906 6 memzones totaling size 4.142822 MiB 00:04:36.906 size: 1.000366 MiB name: RG_ring_0_58132 00:04:36.906 size: 1.000366 MiB name: RG_ring_1_58132 00:04:36.906 size: 1.000366 MiB name: RG_ring_4_58132 00:04:36.906 size: 1.000366 MiB name: RG_ring_5_58132 00:04:36.906 size: 0.125366 MiB name: RG_ring_2_58132 00:04:36.906 size: 0.015991 MiB name: RG_ring_3_58132 00:04:36.906 end memzones------- 00:04:36.906 12:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:36.906 heap id: 0 total size: 816.000000 MiB number of busy elements: 324 number of free elements: 18 00:04:36.906 list of free elements. 
size: 16.789185 MiB 00:04:36.906 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:36.906 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:36.906 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:36.906 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:36.906 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:36.906 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:36.906 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:36.906 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:36.906 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:36.906 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:36.906 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:36.906 element at address: 0x20001ac00000 with size: 0.559753 MiB 00:04:36.906 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:36.906 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:36.906 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:36.906 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:36.906 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:36.906 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:36.906 list of standard malloc elements. size: 199.289917 MiB 00:04:36.906 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:36.906 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:36.906 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:36.906 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:36.906 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:36.906 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:36.906 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:36.906 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:36.906 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:36.906 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:36.906 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:36.906 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:36.906 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:36.906 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:36.906 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:36.906 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:36.906 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:36.907 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac914c0 with size: 0.000244 MiB 
00:04:36.907 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:36.907 element at 
address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:36.907 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:36.907 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806cf80 
with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:36.908 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:36.908 list of memzone associated elements. 
size: 599.920898 MiB 00:04:36.908 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:36.908 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:36.908 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:36.908 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:36.908 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:36.908 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58132_0 00:04:36.908 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:36.908 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58132_0 00:04:36.908 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:36.908 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58132_0 00:04:36.908 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:36.908 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:36.908 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:36.908 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:36.908 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:36.908 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58132_0 00:04:36.908 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:36.908 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58132 00:04:36.908 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:36.908 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58132 00:04:36.908 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:36.908 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:36.908 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:36.908 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:36.908 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:36.908 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:36.908 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:36.908 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:36.908 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:36.908 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58132 00:04:36.908 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:36.908 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58132 00:04:36.908 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:36.908 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58132 00:04:36.908 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:36.908 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58132 00:04:36.908 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:36.908 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58132 00:04:36.908 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:36.908 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58132 00:04:36.908 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:36.908 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:36.908 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:36.908 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:36.908 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:36.908 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:36.908 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:36.908 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58132 00:04:36.908 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:36.908 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58132 00:04:36.908 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:36.908 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:36.908 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:36.908 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:36.908 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:36.908 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58132 00:04:36.908 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:36.908 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:36.908 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:36.908 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58132 00:04:36.908 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:36.908 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58132 00:04:36.908 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:36.908 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58132 00:04:36.908 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:36.908 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:36.908 12:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:36.908 12:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58132 00:04:36.908 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58132 ']' 00:04:36.908 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58132 00:04:36.908 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:36.909 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.909 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58132 00:04:36.909 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.909 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.909 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58132' 00:04:36.909 killing process with pid 58132 00:04:36.909 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58132 00:04:36.909 12:02:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58132 00:04:38.825 00:04:38.825 real 0m2.946s 00:04:38.825 user 0m2.853s 00:04:38.825 sys 0m0.478s 00:04:38.825 12:02:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.825 12:02:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.825 ************************************ 00:04:38.825 END TEST dpdk_mem_utility 00:04:38.825 ************************************ 00:04:38.825 12:02:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.825 12:02:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.825 12:02:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.825 12:02:39 -- common/autotest_common.sh@10 -- # set +x 
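
The teardown traced above is SPDK's standard killprocess helper from test/common/autotest_common.sh, echoed step by step: validate the pid argument, probe the process with kill -0, confirm via ps that the target is an SPDK reactor (never sudo), then kill and wait. A minimal sketch of that sequence, reconstructed from the xtrace alone (the real helper's sudo branch resolves the child pid and kills that instead; the detail is elided here):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1            # '[' -z 58132 ']' in the trace
        kill -0 "$pid" || return 1           # process must still be alive
        if [ "$(uname)" = Linux ]; then
            # e.g. reactor_0 for an SPDK app; the helper refuses to kill sudo itself
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # real helper handles this case differently
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # works because the target is our child process
    }
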
00:04:38.825 ************************************ 00:04:38.825 START TEST event 00:04:38.825 ************************************ 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.825 * Looking for test storage... 00:04:38.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.825 12:02:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.825 12:02:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.825 12:02:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.825 12:02:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.825 12:02:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.825 12:02:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.825 12:02:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.825 12:02:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.825 12:02:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.825 12:02:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.825 12:02:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.825 12:02:39 event -- scripts/common.sh@344 -- # case "$op" in 00:04:38.825 12:02:39 event -- scripts/common.sh@345 -- # : 1 00:04:38.825 12:02:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.825 12:02:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.825 12:02:39 event -- scripts/common.sh@365 -- # decimal 1 00:04:38.825 12:02:39 event -- scripts/common.sh@353 -- # local d=1 00:04:38.825 12:02:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.825 12:02:39 event -- scripts/common.sh@355 -- # echo 1 00:04:38.825 12:02:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.825 12:02:39 event -- scripts/common.sh@366 -- # decimal 2 00:04:38.825 12:02:39 event -- scripts/common.sh@353 -- # local d=2 00:04:38.825 12:02:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.825 12:02:39 event -- scripts/common.sh@355 -- # echo 2 00:04:38.825 12:02:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.825 12:02:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.825 12:02:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.825 12:02:39 event -- scripts/common.sh@368 -- # return 0 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.825 --rc genhtml_branch_coverage=1 00:04:38.825 --rc genhtml_function_coverage=1 00:04:38.825 --rc genhtml_legend=1 00:04:38.825 --rc geninfo_all_blocks=1 00:04:38.825 --rc geninfo_unexecuted_blocks=1 00:04:38.825 00:04:38.825 ' 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.825 --rc genhtml_branch_coverage=1 00:04:38.825 --rc genhtml_function_coverage=1 00:04:38.825 --rc genhtml_legend=1 00:04:38.825 --rc 
geninfo_all_blocks=1 00:04:38.825 --rc geninfo_unexecuted_blocks=1 00:04:38.825 00:04:38.825 ' 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.825 --rc genhtml_branch_coverage=1 00:04:38.825 --rc genhtml_function_coverage=1 00:04:38.825 --rc genhtml_legend=1 00:04:38.825 --rc geninfo_all_blocks=1 00:04:38.825 --rc geninfo_unexecuted_blocks=1 00:04:38.825 00:04:38.825 ' 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.825 --rc genhtml_branch_coverage=1 00:04:38.825 --rc genhtml_function_coverage=1 00:04:38.825 --rc genhtml_legend=1 00:04:38.825 --rc geninfo_all_blocks=1 00:04:38.825 --rc geninfo_unexecuted_blocks=1 00:04:38.825 00:04:38.825 ' 00:04:38.825 12:02:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:38.825 12:02:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.825 12:02:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:38.825 12:02:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.825 12:02:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.825 ************************************ 00:04:38.825 START TEST event_perf 00:04:38.825 ************************************ 00:04:38.825 12:02:39 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.825 Running I/O for 1 seconds...[2024-11-25 12:02:39.821537] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:04:38.825 [2024-11-25 12:02:39.821756] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58229 ] 00:04:39.086 [2024-11-25 12:02:39.989822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.086 [2024-11-25 12:02:40.130213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.086 [2024-11-25 12:02:40.130465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.086 [2024-11-25 12:02:40.130571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.086 [2024-11-25 12:02:40.130587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.472 Running I/O for 1 seconds... 00:04:40.472 lcore 0: 136122 00:04:40.472 lcore 1: 136120 00:04:40.472 lcore 2: 136122 00:04:40.472 lcore 3: 136121 00:04:40.472 done. 
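
Before each event test runs, the xtrace above also shows autotest_common.sh probing lcov and comparing its version against 2 via scripts/common.sh's cmp_versions (split fields on IFS=.-:, then compare numerically field by field); because lt 1.15 2 succeeds, the lcov 1.x-style --rc lcov_branch_coverage/lcov_function_coverage options are exported into LCOV_OPTS. A simplified sketch of that comparison, assuming purely numeric dotted versions (the real helper additionally validates each field with its decimal wrapper):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]                   # all fields equal: only ==, <=, >= succeed
    }
    # lt 1.15 2 -> first fields compare 1 < 2, so the function returns success.
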
00:04:40.472 00:04:40.472 real 0m1.533s 00:04:40.472 user 0m4.296s 00:04:40.472 sys 0m0.107s 00:04:40.472 ************************************ 00:04:40.472 END TEST event_perf 00:04:40.472 ************************************ 00:04:40.472 12:02:41 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.472 12:02:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.472 12:02:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:40.472 12:02:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:40.472 12:02:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.472 12:02:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.472 ************************************ 00:04:40.472 START TEST event_reactor 00:04:40.472 ************************************ 00:04:40.472 12:02:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:40.472 [2024-11-25 12:02:41.427822] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:04:40.472 [2024-11-25 12:02:41.428219] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58274 ] 00:04:40.734 [2024-11-25 12:02:41.591748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.734 [2024-11-25 12:02:41.727096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.121 test_start 00:04:42.121 oneshot 00:04:42.121 tick 100 00:04:42.121 tick 100 00:04:42.121 tick 250 00:04:42.121 tick 100 00:04:42.121 tick 100 00:04:42.121 tick 100 00:04:42.121 tick 250 00:04:42.121 tick 500 00:04:42.121 tick 100 00:04:42.121 tick 100 00:04:42.121 tick 250 00:04:42.121 tick 100 00:04:42.121 tick 100 00:04:42.121 test_end 00:04:42.121 00:04:42.121 real 0m1.512s 00:04:42.121 user 0m1.301s 00:04:42.121 sys 0m0.098s 00:04:42.121 ************************************ 00:04:42.121 END TEST event_reactor 00:04:42.121 ************************************ 00:04:42.121 12:02:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.121 12:02:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:42.121 12:02:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.121 12:02:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:42.121 12:02:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.121 12:02:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.121 ************************************ 00:04:42.121 START TEST event_reactor_perf 00:04:42.121 ************************************ 00:04:42.121 12:02:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.121 [2024-11-25 12:02:43.011585] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:04:42.121 [2024-11-25 12:02:43.011744] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58305 ] 00:04:42.121 [2024-11-25 12:02:43.179858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.383 [2024-11-25 12:02:43.315643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.772 test_start 00:04:43.772 test_end 00:04:43.772 Performance: 305367 events per second 00:04:43.772 00:04:43.772 real 0m1.518s 00:04:43.772 user 0m1.317s 00:04:43.772 sys 0m0.088s 00:04:43.772 12:02:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.772 ************************************ 00:04:43.772 END TEST event_reactor_perf 00:04:43.772 ************************************ 00:04:43.772 12:02:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.772 12:02:44 event -- event/event.sh@49 -- # uname -s 00:04:43.772 12:02:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:43.772 12:02:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:43.772 12:02:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.772 12:02:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.772 12:02:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.772 ************************************ 00:04:43.772 START TEST event_scheduler 00:04:43.772 ************************************ 00:04:43.772 12:02:44 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:43.772 * Looking for test storage... 
00:04:43.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:43.772 12:02:44 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.772 12:02:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.772 12:02:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.772 12:02:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.772 12:02:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.772 12:02:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.772 12:02:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.772 12:02:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.772 12:02:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.772 12:02:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.773 12:02:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.773 --rc genhtml_branch_coverage=1 00:04:43.773 --rc genhtml_function_coverage=1 00:04:43.773 --rc genhtml_legend=1 00:04:43.773 --rc geninfo_all_blocks=1 00:04:43.773 --rc geninfo_unexecuted_blocks=1 00:04:43.773 00:04:43.773 ' 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.773 --rc genhtml_branch_coverage=1 00:04:43.773 --rc genhtml_function_coverage=1 00:04:43.773 --rc genhtml_legend=1 00:04:43.773 --rc geninfo_all_blocks=1 00:04:43.773 --rc geninfo_unexecuted_blocks=1 00:04:43.773 00:04:43.773 ' 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.773 --rc genhtml_branch_coverage=1 00:04:43.773 --rc genhtml_function_coverage=1 00:04:43.773 --rc genhtml_legend=1 00:04:43.773 --rc geninfo_all_blocks=1 00:04:43.773 --rc geninfo_unexecuted_blocks=1 00:04:43.773 00:04:43.773 ' 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.773 --rc genhtml_branch_coverage=1 00:04:43.773 --rc genhtml_function_coverage=1 00:04:43.773 --rc genhtml_legend=1 00:04:43.773 --rc geninfo_all_blocks=1 00:04:43.773 --rc geninfo_unexecuted_blocks=1 00:04:43.773 00:04:43.773 ' 00:04:43.773 12:02:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:43.773 12:02:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58381 00:04:43.773 12:02:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.773 12:02:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58381 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58381 ']' 00:04:43.773 12:02:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.773 12:02:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.773 [2024-11-25 12:02:44.818916] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:04:43.773 [2024-11-25 12:02:44.819101] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58381 ] 00:04:44.035 [2024-11-25 12:02:44.990500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.297 [2024-11-25 12:02:45.140653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.297 [2024-11-25 12:02:45.141138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.297 [2024-11-25 12:02:45.141268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.297 [2024-11-25 12:02:45.141317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.869 12:02:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.869 12:02:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:44.869 12:02:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:44.869 12:02:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.869 12:02:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.869 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.869 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.869 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.869 POWER: Cannot set governor of lcore 0 to performance 00:04:44.869 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.869 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.869 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:44.869 POWER: Cannot set governor of lcore 0 to userspace 00:04:44.869 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:44.869 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:44.869 POWER: Unable to set Power Management Environment for lcore 0 00:04:44.869 [2024-11-25 12:02:45.691469] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:44.869 [2024-11-25 12:02:45.691501] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:44.869 [2024-11-25 12:02:45.691512] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:44.869 [2024-11-25 
12:02:45.691531] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:44.869 [2024-11-25 12:02:45.691540] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:44.869 [2024-11-25 12:02:45.691550] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:44.869 12:02:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.869 12:02:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:44.869 12:02:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.869 12:02:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 [2024-11-25 12:02:45.979308] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:45.182 12:02:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:45.182 12:02:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.182 12:02:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.182 12:02:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 ************************************ 00:04:45.182 START TEST scheduler_create_thread 00:04:45.182 ************************************ 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 2 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 3 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 4 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:45.182 12:02:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 5 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 6 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 7 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 8 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 9 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 10 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 12:02:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.182 12:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.584 12:02:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.584 12:02:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:46.584 12:02:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:46.584 12:02:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.584 12:02:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.968 ************************************ 00:04:47.968 END TEST scheduler_create_thread 00:04:47.968 ************************************ 00:04:47.968 12:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.968 00:04:47.968 real 0m2.619s 00:04:47.968 user 0m0.016s 00:04:47.968 sys 0m0.006s 00:04:47.968 12:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.968 12:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.968 12:02:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:47.968 12:02:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58381 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58381 ']' 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58381 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58381 00:04:47.968 killing process with pid 58381 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58381' 00:04:47.968 12:02:48 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58381 00:04:47.968 
12:02:48 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58381 00:04:48.230 [2024-11-25 12:02:49.099916] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:49.171 00:04:49.171 real 0m5.363s 00:04:49.171 user 0m9.140s 00:04:49.171 sys 0m0.460s 00:04:49.171 12:02:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.171 ************************************ 00:04:49.171 END TEST event_scheduler 00:04:49.171 ************************************ 00:04:49.171 12:02:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.171 12:02:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.171 12:02:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.171 12:02:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.171 12:02:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.171 12:02:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.171 ************************************ 00:04:49.171 START TEST app_repeat 00:04:49.171 ************************************ 00:04:49.171 12:02:50 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58487 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.171 Process app_repeat pid: 58487 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58487' 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.171 spdk_app_start Round 0 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58487 /var/tmp/spdk-nbd.sock 00:04:49.171 12:02:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58487 ']' 00:04:49.171 12:02:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.171 12:02:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.171 12:02:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.171 12:02:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
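
Here app_repeat is launched with -r /var/tmp/spdk-nbd.sock, and the harness blocks in waitforlisten until that RPC socket answers. A rough sketch of the wait loop, reconstructed from the trace (pid check, rpc_addr, max_retries=100); the polling call shown, rpc_get_methods via scripts/rpc.py, is an assumption about the helper's internals rather than a verbatim copy:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" || return 1       # target app died before it started listening
            # assumption: any cheap RPC serves as a liveness probe of the socket
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1                             # gave up after max_retries polls
    }
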
00:04:49.171 12:02:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.171 12:02:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.171 [2024-11-25 12:02:50.069066] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:04:49.171 [2024-11-25 12:02:50.069203] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58487 ] 00:04:49.171 [2024-11-25 12:02:50.229941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.432 [2024-11-25 12:02:50.364489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.432 [2024-11-25 12:02:50.364489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.004 12:02:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.004 12:02:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.005 12:02:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.266 Malloc0 00:04:50.266 12:02:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.527 Malloc1 00:04:50.527 12:02:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.527 12:02:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.527 12:02:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.527 12:02:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.527 12:02:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.527 12:02:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.527 12:02:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.527 12:02:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.528 12:02:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.528 12:02:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.528 12:02:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.528 12:02:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.528 12:02:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.528 12:02:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.528 12:02:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.528 12:02:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.789 /dev/nbd0 00:04:50.789 12:02:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.789 12:02:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.789 12:02:51 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.789 1+0 records in 00:04:50.789 1+0 records out 00:04:50.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330397 s, 12.4 MB/s 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.789 12:02:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.789 12:02:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.789 12:02:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.789 12:02:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.050 /dev/nbd1 00:04:51.050 12:02:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.050 12:02:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.050 1+0 records in 00:04:51.050 1+0 records out 00:04:51.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605117 s, 6.8 MB/s 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.050 12:02:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.050 12:02:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.050 
12:02:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.050 12:02:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.050 12:02:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.050 12:02:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.311 { 00:04:51.311 "nbd_device": "/dev/nbd0", 00:04:51.311 "bdev_name": "Malloc0" 00:04:51.311 }, 00:04:51.311 { 00:04:51.311 "nbd_device": "/dev/nbd1", 00:04:51.311 "bdev_name": "Malloc1" 00:04:51.311 } 00:04:51.311 ]' 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.311 { 00:04:51.311 "nbd_device": "/dev/nbd0", 00:04:51.311 "bdev_name": "Malloc0" 00:04:51.311 }, 00:04:51.311 { 00:04:51.311 "nbd_device": "/dev/nbd1", 00:04:51.311 "bdev_name": "Malloc1" 00:04:51.311 } 00:04:51.311 ]' 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.311 /dev/nbd1' 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.311 /dev/nbd1' 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.311 256+0 records in 00:04:51.311 256+0 records out 00:04:51.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00844229 s, 124 MB/s 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.311 256+0 records in 00:04:51.311 256+0 records out 00:04:51.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234268 s, 44.8 MB/s 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.311 12:02:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.573 256+0 records in 00:04:51.573 256+0 records out 00:04:51.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310881 s, 33.7 MB/s 00:04:51.573 12:02:52 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.573 12:02:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.573 12:02:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.573 12:02:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.573 12:02:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.573 12:02:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.573 12:02:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.574 12:02:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.835 12:02:52 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.835 12:02:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.095 12:02:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.095 12:02:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.666 12:02:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.239 [2024-11-25 12:02:54.313037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.499 [2024-11-25 12:02:54.448112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.499 [2024-11-25 12:02:54.448354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.758 [2024-11-25 12:02:54.608179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.758 [2024-11-25 12:02:54.608258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.667 spdk_app_start Round 1 00:04:55.667 12:02:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.667 12:02:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:55.667 12:02:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58487 /var/tmp/spdk-nbd.sock 00:04:55.667 12:02:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58487 ']' 00:04:55.667 12:02:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.667 12:02:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:55.668 12:02:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
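Both nbd helpers seen in this round poll /proc/partitions: waitfornbd loops until the device node appears (then confirms it answers the direct-I/O dd read shown above), while waitfornbd_exit loops until it disappears after nbd_stop_disk. A stripped-down sketch of the two polling halves, with the retry bound of 20 taken from the log and the sleep interval assumed:

  waitfornbd() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && return 0
          sleep 0.1
      done
      return 1    # device never showed up
  }

  waitfornbd_exit() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || return 0
          sleep 0.1
      done
      return 1    # device never went away
  }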
00:04:55.668 12:02:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.668 12:02:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.668 12:02:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.668 12:02:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:55.668 12:02:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.929 Malloc0 00:04:55.929 12:02:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.189 Malloc1 00:04:56.450 12:02:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.450 /dev/nbd0 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.450 12:02:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.450 12:02:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.712 1+0 records in 00:04:56.712 1+0 records out 
00:04:56.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418591 s, 9.8 MB/s 00:04:56.712 12:02:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.712 12:02:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.712 12:02:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.712 12:02:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.712 12:02:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.712 12:02:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.712 12:02:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.712 12:02:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.712 /dev/nbd1 00:04:56.973 12:02:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.973 12:02:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.973 1+0 records in 00:04:56.973 1+0 records out 00:04:56.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272132 s, 15.1 MB/s 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.973 12:02:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.973 12:02:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.973 12:02:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.973 12:02:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.973 12:02:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.973 12:02:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.973 12:02:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.973 { 00:04:56.973 "nbd_device": "/dev/nbd0", 00:04:56.973 "bdev_name": "Malloc0" 00:04:56.973 }, 00:04:56.973 { 00:04:56.973 "nbd_device": "/dev/nbd1", 00:04:56.973 "bdev_name": "Malloc1" 00:04:56.973 } 
00:04:56.973 ]' 00:04:56.973 12:02:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.973 { 00:04:56.973 "nbd_device": "/dev/nbd0", 00:04:56.973 "bdev_name": "Malloc0" 00:04:56.973 }, 00:04:56.973 { 00:04:56.973 "nbd_device": "/dev/nbd1", 00:04:56.973 "bdev_name": "Malloc1" 00:04:56.973 } 00:04:56.973 ]' 00:04:56.973 12:02:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.234 /dev/nbd1' 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.234 /dev/nbd1' 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.234 256+0 records in 00:04:57.234 256+0 records out 00:04:57.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437448 s, 240 MB/s 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.234 256+0 records in 00:04:57.234 256+0 records out 00:04:57.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240187 s, 43.7 MB/s 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.234 256+0 records in 00:04:57.234 256+0 records out 00:04:57.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323861 s, 32.4 MB/s 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.234 12:02:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.234 12:02:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.494 12:02:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.494 12:02:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.494 12:02:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.494 12:02:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.494 12:02:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.494 12:02:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.494 12:02:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.494 12:02:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.495 12:02:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.495 12:02:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.755 12:02:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.755 12:02:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.755 12:02:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.755 12:02:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.756 12:02:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.756 12:02:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.756 12:02:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.756 12:02:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.756 12:02:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.756 12:02:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.756 12:02:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.017 12:02:58 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.017 12:02:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.017 12:02:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.277 12:02:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.220 [2024-11-25 12:02:59.995599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.220 [2024-11-25 12:03:00.110274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.220 [2024-11-25 12:03:00.110459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.220 [2024-11-25 12:03:00.246789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.220 [2024-11-25 12:03:00.246865] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.761 12:03:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.761 spdk_app_start Round 2 00:05:01.761 12:03:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:01.761 12:03:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58487 /var/tmp/spdk-nbd.sock 00:05:01.761 12:03:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58487 ']' 00:05:01.761 12:03:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.761 12:03:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:01.761 12:03:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
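Each round's data check is the same dd/cmp cycle just completed above: fill a temp file with 1 MiB of random data, write it through both nbd devices with direct I/O, then read each device back and byte-compare against the file. As a standalone sketch, with the temp path assumed:

  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write through the nbd
      cmp -b -n 1M "$tmp" "$nbd"                            # verify it reads back intact
  done
  rm "$tmp"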
00:05:01.761 12:03:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.761 12:03:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.761 12:03:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.761 12:03:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.762 12:03:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.762 Malloc0 00:05:01.762 12:03:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.019 Malloc1 00:05:02.019 12:03:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.019 12:03:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.020 12:03:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.278 /dev/nbd0 00:05:02.278 12:03:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.278 12:03:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.278 1+0 records in 00:05:02.278 1+0 records out 
00:05:02.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020321 s, 20.2 MB/s 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.278 12:03:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.278 12:03:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.278 12:03:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.278 12:03:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.536 /dev/nbd1 00:05:02.536 12:03:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.536 12:03:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.536 1+0 records in 00:05:02.536 1+0 records out 00:05:02.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196814 s, 20.8 MB/s 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.536 12:03:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.536 12:03:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.536 12:03:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.536 12:03:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.536 12:03:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.536 12:03:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.795 { 00:05:02.795 "nbd_device": "/dev/nbd0", 00:05:02.795 "bdev_name": "Malloc0" 00:05:02.795 }, 00:05:02.795 { 00:05:02.795 "nbd_device": "/dev/nbd1", 00:05:02.795 "bdev_name": "Malloc1" 00:05:02.795 } 
00:05:02.795 ]' 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.795 { 00:05:02.795 "nbd_device": "/dev/nbd0", 00:05:02.795 "bdev_name": "Malloc0" 00:05:02.795 }, 00:05:02.795 { 00:05:02.795 "nbd_device": "/dev/nbd1", 00:05:02.795 "bdev_name": "Malloc1" 00:05:02.795 } 00:05:02.795 ]' 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.795 /dev/nbd1' 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.795 /dev/nbd1' 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.795 256+0 records in 00:05:02.795 256+0 records out 00:05:02.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514912 s, 204 MB/s 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.795 256+0 records in 00:05:02.795 256+0 records out 00:05:02.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161611 s, 64.9 MB/s 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.795 12:03:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.795 256+0 records in 00:05:02.795 256+0 records out 00:05:02.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177044 s, 59.2 MB/s 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.796 12:03:03 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.796 12:03:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.054 12:03:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.313 12:03:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.313 12:03:04 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.572 12:03:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.572 12:03:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.830 12:03:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.396 [2024-11-25 12:03:05.269298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.396 [2024-11-25 12:03:05.341845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.396 [2024-11-25 12:03:05.341857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.396 [2024-11-25 12:03:05.446306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.396 [2024-11-25 12:03:05.446359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.955 12:03:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58487 /var/tmp/spdk-nbd.sock 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58487 ']' 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
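The count=0 check just above is how nbd_get_count decides teardown worked: ask the app for its disk list over RPC, pull the device nodes out of the JSON with jq, and count them. A sketch of that check, with the socket path as in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits 1 on zero matches
  [ "$count" -eq 0 ]                                  # all disks must be gone after stop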
00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:06.955 12:03:07 event.app_repeat -- event/event.sh@39 -- # killprocess 58487 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58487 ']' 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58487 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58487 00:05:06.955 killing process with pid 58487 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58487' 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58487 00:05:06.955 12:03:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58487 00:05:07.554 spdk_app_start is called in Round 0. 00:05:07.554 Shutdown signal received, stop current app iteration 00:05:07.554 Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 reinitialization... 00:05:07.554 spdk_app_start is called in Round 1. 00:05:07.554 Shutdown signal received, stop current app iteration 00:05:07.554 Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 reinitialization... 00:05:07.554 spdk_app_start is called in Round 2. 00:05:07.554 Shutdown signal received, stop current app iteration 00:05:07.554 Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 reinitialization... 00:05:07.554 spdk_app_start is called in Round 3. 00:05:07.554 Shutdown signal received, stop current app iteration 00:05:07.554 ************************************ 00:05:07.554 END TEST app_repeat 00:05:07.554 ************************************ 00:05:07.554 12:03:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:07.554 12:03:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:07.554 00:05:07.554 real 0m18.456s 00:05:07.554 user 0m40.167s 00:05:07.554 sys 0m2.473s 00:05:07.554 12:03:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.554 12:03:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.554 12:03:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:07.554 12:03:08 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:07.554 12:03:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.554 12:03:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.554 12:03:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.554 ************************************ 00:05:07.554 START TEST cpu_locks 00:05:07.554 ************************************ 00:05:07.554 12:03:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:07.554 * Looking for test storage... 
00:05:07.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:07.554 12:03:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.554 12:03:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.554 12:03:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.813 12:03:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.813 12:03:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:07.813 12:03:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.813 12:03:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.813 --rc genhtml_branch_coverage=1 00:05:07.813 --rc genhtml_function_coverage=1 00:05:07.813 --rc genhtml_legend=1 00:05:07.813 --rc geninfo_all_blocks=1 00:05:07.813 --rc geninfo_unexecuted_blocks=1 00:05:07.813 00:05:07.813 ' 00:05:07.813 12:03:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.813 --rc genhtml_branch_coverage=1 00:05:07.813 --rc genhtml_function_coverage=1 
00:05:07.813 --rc genhtml_legend=1 00:05:07.813 --rc geninfo_all_blocks=1 00:05:07.813 --rc geninfo_unexecuted_blocks=1 00:05:07.813 00:05:07.813 ' 00:05:07.813 12:03:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.813 --rc genhtml_branch_coverage=1 00:05:07.813 --rc genhtml_function_coverage=1 00:05:07.813 --rc genhtml_legend=1 00:05:07.813 --rc geninfo_all_blocks=1 00:05:07.813 --rc geninfo_unexecuted_blocks=1 00:05:07.813 00:05:07.813 ' 00:05:07.814 12:03:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.814 --rc genhtml_branch_coverage=1 00:05:07.814 --rc genhtml_function_coverage=1 00:05:07.814 --rc genhtml_legend=1 00:05:07.814 --rc geninfo_all_blocks=1 00:05:07.814 --rc geninfo_unexecuted_blocks=1 00:05:07.814 00:05:07.814 ' 00:05:07.814 12:03:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:07.814 12:03:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:07.814 12:03:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:07.814 12:03:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:07.814 12:03:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.814 12:03:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.814 12:03:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.814 ************************************ 00:05:07.814 START TEST default_locks 00:05:07.814 ************************************ 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58923 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58923 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58923 ']' 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.814 12:03:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.814 [2024-11-25 12:03:08.770887] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
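A note on the coverage preamble traced above: autotest_common.sh enables the extra lcov --rc flags only when the installed lcov predates 2.x, using the pure-bash cmp_versions walk (split both version strings on ".", "-" and ":", then compare component by component). A rough standalone sketch of that gate, approximating the full comparison by the major version only:

# Sketch only: enable the branch/function coverage flags for lcov < 2.x,
# mirroring the "lt 1.15 2" comparison traced above.
ver=$(lcov --version | awk '{print $NF}')
if [[ "${ver%%.*}" -lt 2 ]]; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi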
00:05:07.814 [2024-11-25 12:03:08.771021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58923 ] 00:05:08.074 [2024-11-25 12:03:08.926404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.074 [2024-11-25 12:03:09.027855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.681 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.681 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:08.681 12:03:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58923 00:05:08.681 12:03:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58923 00:05:08.681 12:03:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58923 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58923 ']' 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58923 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58923 00:05:08.941 killing process with pid 58923 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58923' 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58923 00:05:08.941 12:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58923 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58923 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58923 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58923 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58923 ']' 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
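The locks_exist check that default_locks runs above reduces to a single pipeline: ask lslocks(8) for the POSIX locks held by the target PID and grep for the spdk_cpu_lock file name. A standalone sketch of that check (assumes util-linux lslocks is installed; 58923 is just the PID this particular run happened to get):

# Sketch of the locks_exist helper seen throughout this log.
pid=58923                      # substitute the PID reported by spdk_tgt
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "PID $pid holds its CPU core lock file"
else
    echo "PID $pid holds no spdk_cpu_lock"
fi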
00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.855 ERROR: process (pid: 58923) is no longer running 00:05:10.855 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58923) - No such process 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.855 00:05:10.855 real 0m2.768s 00:05:10.855 user 0m2.744s 00:05:10.855 sys 0m0.444s 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.855 12:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.855 ************************************ 00:05:10.855 END TEST default_locks 00:05:10.855 ************************************ 00:05:10.855 12:03:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:10.855 12:03:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.855 12:03:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.855 12:03:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.855 ************************************ 00:05:10.855 START TEST default_locks_via_rpc 00:05:10.855 ************************************ 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58987 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58987 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58987 ']' 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.855 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.855 12:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.855 [2024-11-25 12:03:11.609059] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:05:10.855 [2024-11-25 12:03:11.609565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58987 ] 00:05:10.855 [2024-11-25 12:03:11.770838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.855 [2024-11-25 12:03:11.871035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58987 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.495 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58987 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58987 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58987 ']' 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58987 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.756 12:03:12 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58987 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58987' 00:05:11.756 killing process with pid 58987 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58987 00:05:11.756 12:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58987 00:05:13.733 00:05:13.733 real 0m2.704s 00:05:13.733 user 0m2.718s 00:05:13.733 sys 0m0.439s 00:05:13.733 ************************************ 00:05:13.733 12:03:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.733 12:03:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.733 END TEST default_locks_via_rpc 00:05:13.733 ************************************ 00:05:13.733 12:03:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:13.733 12:03:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.733 12:03:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.733 12:03:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.733 ************************************ 00:05:13.733 START TEST non_locking_app_on_locked_coremask 00:05:13.733 ************************************ 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59045 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59045 /var/tmp/spdk.sock 00:05:13.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59045 ']' 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.733 12:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.733 [2024-11-25 12:03:14.374121] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
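The default_locks_via_rpc test that just finished drives the same lock lifecycle over JSON-RPC instead of process start/stop: framework_disable_cpumask_locks releases the lock files, framework_enable_cpumask_locks re-claims them, and locks_exist then verifies the claim. A condensed sketch of that flow, assuming the repo layout shown in this log and going through scripts/rpc.py the way the rpc_cmd helper does (the sleep is a crude stand-in for waitforlisten):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 &
tgt=$!
sleep 2                                                   # stand-in for waitforlisten
"$SPDK/scripts/rpc.py" framework_disable_cpumask_locks    # releases /var/tmp/spdk_cpu_lock_000
"$SPDK/scripts/rpc.py" framework_enable_cpumask_locks     # re-acquires it
lslocks -p "$tgt" | grep -q spdk_cpu_lock && echo "lock re-claimed"
kill "$tgt"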
00:05:13.733 [2024-11-25 12:03:14.374249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59045 ] 00:05:13.733 [2024-11-25 12:03:14.539351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.733 [2024-11-25 12:03:14.659197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59061 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59061 /var/tmp/spdk2.sock 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59061 ']' 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.305 12:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.565 [2024-11-25 12:03:15.444270] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:05:14.565 [2024-11-25 12:03:15.444418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59061 ] 00:05:14.565 [2024-11-25 12:03:15.621171] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
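The "CPU core locks deactivated." notice above is the crux of non_locking_app_on_locked_coremask: the second instance (pid 59061) is allowed onto the same core 0 as the locked pid 59045 only because it was started with --disable-cpumask-locks and its own RPC socket, so it never competes for the core 0 lock file. Reduced to its essentials, the pairing looks like this sketch:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 &                         # claims the core 0 lock
"$SPDK/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &                                # shares core 0 without locking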
00:05:14.565 [2024-11-25 12:03:15.621220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.822 [2024-11-25 12:03:15.820813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.205 12:03:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.205 12:03:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:16.205 12:03:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59045 00:05:16.205 12:03:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.205 12:03:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59045 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59045 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59045 ']' 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59045 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59045 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.205 killing process with pid 59045 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59045' 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59045 00:05:16.205 12:03:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59045 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59061 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59061 ']' 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59061 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59061 00:05:19.500 killing process with pid 59061 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59061' 00:05:19.500 12:03:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59061 00:05:19.500 12:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59061 00:05:21.416 ************************************ 00:05:21.416 END TEST non_locking_app_on_locked_coremask 00:05:21.416 ************************************ 00:05:21.416 00:05:21.416 real 0m7.834s 00:05:21.416 user 0m7.974s 00:05:21.416 sys 0m0.967s 00:05:21.416 12:03:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.416 12:03:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.416 12:03:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:21.416 12:03:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.416 12:03:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.416 12:03:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.416 ************************************ 00:05:21.416 START TEST locking_app_on_unlocked_coremask 00:05:21.416 ************************************ 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:21.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59168 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59168 /var/tmp/spdk.sock 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59168 ']' 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.416 12:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:21.416 [2024-11-25 12:03:22.287311] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:05:21.416 [2024-11-25 12:03:22.287690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:05:21.416 [2024-11-25 12:03:22.453671] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
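locking_app_on_unlocked_coremask, starting above, inverts the previous case: the first target runs with --disable-cpumask-locks, so it is the second, lock-enabled target (pid 59190, on /var/tmp/spdk2.sock) that ends up owning the core 0 lock, which is what locks_exist 59190 verifies further down. To see at a glance which process owns which core lock while such a test runs, a one-off query like this sketch works (assumes util-linux lslocks):

# List every held spdk_cpu_lock file and its owning process, system-wide.
lslocks --noheadings -o PID,COMMAND,PATH | grep spdk_cpu_lock \
    || echo "no SPDK core locks currently held"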
00:05:21.416 [2024-11-25 12:03:22.453746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.678 [2024-11-25 12:03:22.588547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59190 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59190 /var/tmp/spdk2.sock 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59190 ']' 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.250 12:03:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.511 [2024-11-25 12:03:23.408086] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:05:22.511 [2024-11-25 12:03:23.409084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59190 ] 00:05:22.771 [2024-11-25 12:03:23.590281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.771 [2024-11-25 12:03:23.847972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.152 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.152 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.152 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59190 00:05:24.152 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59190 00:05:24.152 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59168 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59168 ']' 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59168 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59168 00:05:24.415 killing process with pid 59168 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59168' 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59168 00:05:24.415 12:03:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59168 00:05:27.730 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59190 00:05:27.730 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59190 ']' 00:05:27.730 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59190 00:05:27.730 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.730 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.731 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59190 00:05:27.731 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.731 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.731 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59190' 00:05:27.731 killing process with pid 59190 00:05:27.731 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59190 00:05:27.731 12:03:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59190 00:05:29.644 00:05:29.644 real 0m8.032s 00:05:29.644 user 0m8.146s 00:05:29.644 sys 0m1.045s 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.644 ************************************ 00:05:29.644 END TEST locking_app_on_unlocked_coremask 00:05:29.644 ************************************ 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.644 12:03:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.644 12:03:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.644 12:03:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.644 12:03:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.644 ************************************ 00:05:29.644 START TEST locking_app_on_locked_coremask 00:05:29.644 ************************************ 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59297 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59297 /var/tmp/spdk.sock 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59297 ']' 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.644 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.645 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.645 12:03:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.645 [2024-11-25 12:03:30.378355] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:05:29.645 [2024-11-25 12:03:30.378483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:05:29.645 [2024-11-25 12:03:30.536971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.645 [2024-11-25 12:03:30.638957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59313 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59313 /var/tmp/spdk2.sock 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59313 /var/tmp/spdk2.sock 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59313 /var/tmp/spdk2.sock 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59313 ']' 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.241 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.502 [2024-11-25 12:03:31.353736] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:05:30.502 [2024-11-25 12:03:31.353860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59313 ] 00:05:30.502 [2024-11-25 12:03:31.529332] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59297 has claimed it. 00:05:30.502 [2024-11-25 12:03:31.529396] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.072 ERROR: process (pid: 59313) is no longer running 00:05:31.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59313) - No such process 00:05:31.072 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.073 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:31.073 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:31.073 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.073 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.073 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.073 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59297 00:05:31.073 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59297 00:05:31.073 12:03:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59297 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59297 ']' 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59297 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59297 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.335 killing process with pid 59297 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59297' 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59297 00:05:31.335 12:03:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59297 00:05:33.261 00:05:33.261 real 0m3.521s 00:05:33.261 user 0m3.718s 00:05:33.261 sys 0m0.577s 00:05:33.261 12:03:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.261 12:03:33 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:33.261 ************************************ 00:05:33.261 END TEST locking_app_on_locked_coremask 00:05:33.261 ************************************ 00:05:33.261 12:03:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:33.261 12:03:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.261 12:03:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.261 12:03:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 ************************************ 00:05:33.261 START TEST locking_overlapped_coremask 00:05:33.261 ************************************ 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59372 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59372 /var/tmp/spdk.sock 00:05:33.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59372 ']' 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 12:03:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:33.261 [2024-11-25 12:03:33.977612] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
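Before the overlapped-mask variant proceeds, note the pattern locking_app_on_locked_coremask just proved: with pid 59297 holding the core 0 lock, the second spdk_tgt aborted with "Cannot create lock on core 0, probably process 59297 has claimed it", and the harness's NOT wrapper asserted that non-zero exit. The same negative test in isolation would be roughly this sketch (assumes a lock-holding target is already up on core 0):

SPDK=/home/vagrant/spdk_repo/spdk
if ! "$SPDK/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second target refused to start: core 0 already claimed"
fi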
00:05:33.261 [2024-11-25 12:03:33.977771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ] 00:05:33.261 [2024-11-25 12:03:34.144208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.261 [2024-11-25 12:03:34.282791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.261 [2024-11-25 12:03:34.283089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.261 [2024-11-25 12:03:34.283251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59390 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59390 /var/tmp/spdk2.sock 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59390 /var/tmp/spdk2.sock 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59390 /var/tmp/spdk2.sock 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59390 ']' 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.231 12:03:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.231 [2024-11-25 12:03:35.080118] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
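The two masks used here are what force the collision reported in the next lines: -m 0x7 pins reactors to cores 0-2 and -m 0x1c to cores 2-4, so the two instances overlap exactly on core 2. A small helper, added here purely for illustration (it is not part of the SPDK scripts), makes the decoding explicit:

# Decode a hex coremask into the core numbers it selects.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")
        (( mask >>= 1, core += 1 ))
    done
    echo "${cores[*]}"
}
mask_to_cores 0x7     # -> 0 1 2
mask_to_cores 0x1c    # -> 2 3 4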
00:05:34.231 [2024-11-25 12:03:35.080726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59390 ] 00:05:34.231 [2024-11-25 12:03:35.258941] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59372 has claimed it. 00:05:34.231 [2024-11-25 12:03:35.263049] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.803 ERROR: process (pid: 59390) is no longer running 00:05:34.803 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59390) - No such process 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59372 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59372 ']' 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59372 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59372 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59372' 00:05:34.803 killing process with pid 59372 00:05:34.803 12:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59372 00:05:34.803 12:03:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59372 00:05:36.716 00:05:36.716 real 0m3.548s 00:05:36.716 user 0m9.474s 00:05:36.716 sys 0m0.584s 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.716 ************************************ 00:05:36.716 END TEST locking_overlapped_coremask 00:05:36.716 ************************************ 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.716 12:03:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:36.716 12:03:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.716 12:03:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.716 12:03:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.716 ************************************ 00:05:36.716 START TEST locking_overlapped_coremask_via_rpc 00:05:36.716 ************************************ 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59448 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59448 /var/tmp/spdk.sock 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59448 ']' 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.716 12:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.716 [2024-11-25 12:03:37.593121] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:05:36.716 [2024-11-25 12:03:37.593274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59448 ] 00:05:36.716 [2024-11-25 12:03:37.753389] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
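Worth pausing on the assertion that closed the previous test: check_remaining_locks globbed /var/tmp/spdk_cpu_lock_* and required the result to match exactly spdk_cpu_lock_000 through spdk_cpu_lock_002, i.e. one lock file per core in the 0x7 mask and nothing stale left over. Standalone, that assertion is roughly this sketch (run while a -m 0x7 target still holds its locks):

expected=(/var/tmp/spdk_cpu_lock_{000..002})
actual=(/var/tmp/spdk_cpu_lock_*)
[[ "${actual[*]}" == "${expected[*]}" ]] \
    && echo "lock files match the 0x7 mask" \
    || echo "unexpected lock files: ${actual[*]}"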
00:05:36.716 [2024-11-25 12:03:37.753462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.976 [2024-11-25 12:03:37.900517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.976 [2024-11-25 12:03:37.901102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.976 [2024-11-25 12:03:37.901352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59471 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59471 /var/tmp/spdk2.sock 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59471 ']' 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.924 12:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.924 [2024-11-25 12:03:38.738688] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:05:37.924 [2024-11-25 12:03:38.738875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59471 ] 00:05:37.924 [2024-11-25 12:03:38.925484] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.924 [2024-11-25 12:03:38.925565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.220 [2024-11-25 12:03:39.216678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.220 [2024-11-25 12:03:39.220268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.220 [2024-11-25 12:03:39.220321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.764 [2024-11-25 12:03:41.348192] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59448 has claimed it. 00:05:40.764 request: 00:05:40.764 { 00:05:40.764 "method": "framework_enable_cpumask_locks", 00:05:40.764 "req_id": 1 00:05:40.764 } 00:05:40.764 Got JSON-RPC error response 00:05:40.764 response: 00:05:40.764 { 00:05:40.764 "code": -32603, 00:05:40.764 "message": "Failed to claim CPU core: 2" 00:05:40.764 } 00:05:40.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
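The -32603 failure above is the intended outcome: the first target claims the core locks via RPC, and the second target's attempt on the overlapping mask is then rejected for core 2. A minimal reproduction of the two calls (socket paths as logged; NOT is the autotest helper that inverts a command's exit status):

    # first target claims /var/tmp/spdk_cpu_lock_000..002 for cores 0-2
    rpc_cmd framework_enable_cpumask_locks
    # second target shares core 2 (0x1c overlaps 0x7), so this returns -32603
    # "Failed to claim CPU core: 2"; NOT turns the expected failure into success
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks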
00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.764 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59448 /var/tmp/spdk.sock 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59448 ']' 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59471 /var/tmp/spdk2.sock 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59471 ']' 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.765 ************************************ 00:05:40.765 END TEST locking_overlapped_coremask_via_rpc 00:05:40.765 ************************************ 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.765 00:05:40.765 real 0m4.312s 00:05:40.765 user 0m1.350s 00:05:40.765 sys 0m0.202s 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.765 12:03:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.025 12:03:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:41.025 12:03:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59448 ]] 00:05:41.025 12:03:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59448 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59448 ']' 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59448 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59448 00:05:41.025 killing process with pid 59448 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59448' 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59448 00:05:41.025 12:03:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59448 00:05:42.968 12:03:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59471 ]] 00:05:42.968 12:03:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59471 00:05:42.968 12:03:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59471 ']' 00:05:42.968 12:03:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59471 00:05:42.968 12:03:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:42.968 12:03:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.968 
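After locks are enabled on the first target, check_remaining_locks (cpu_locks.sh@36-38, expanded in the xtrace above) asserts that exactly the lock files for cores 000-002 exist. In essence, comparing a glob of the real files against a brace expansion of the expected names:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files exist now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, and nothing else
    [[ ${locks[*]} == "${locks_expected[*]}" ]]          # must match exactly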
12:03:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59471 00:05:42.968 killing process with pid 59471 00:05:42.968 12:03:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:42.968 12:03:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:42.969 12:03:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59471' 00:05:42.969 12:03:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59471 00:05:42.969 12:03:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59471 00:05:44.353 12:03:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:44.353 12:03:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:44.353 12:03:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59448 ]] 00:05:44.353 12:03:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59448 00:05:44.353 12:03:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59448 ']' 00:05:44.353 Process with pid 59448 is not found 00:05:44.353 12:03:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59448 00:05:44.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59448) - No such process 00:05:44.353 12:03:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59448 is not found' 00:05:44.353 12:03:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59471 ]] 00:05:44.353 12:03:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59471 00:05:44.353 12:03:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59471 ']' 00:05:44.353 Process with pid 59471 is not found 00:05:44.353 12:03:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59471 00:05:44.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59471) - No such process 00:05:44.353 12:03:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59471 is not found' 00:05:44.353 12:03:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:44.353 00:05:44.353 real 0m36.835s 00:05:44.353 user 1m6.149s 00:05:44.353 sys 0m5.409s 00:05:44.353 12:03:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.353 12:03:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.353 ************************************ 00:05:44.353 END TEST cpu_locks 00:05:44.353 ************************************ 00:05:44.353 00:05:44.353 real 1m5.796s 00:05:44.353 user 2m2.534s 00:05:44.353 sys 0m8.919s 00:05:44.353 12:03:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.353 ************************************ 00:05:44.353 END TEST event 00:05:44.353 ************************************ 00:05:44.353 12:03:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.614 12:03:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:44.614 12:03:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.614 12:03:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.614 12:03:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.614 ************************************ 00:05:44.614 START TEST thread 00:05:44.614 ************************************ 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:44.614 * Looking for test storage... 
00:05:44.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.614 12:03:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.614 12:03:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.614 12:03:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.614 12:03:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.614 12:03:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.614 12:03:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.614 12:03:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.614 12:03:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.614 12:03:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.614 12:03:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.614 12:03:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.614 12:03:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:44.614 12:03:45 thread -- scripts/common.sh@345 -- # : 1 00:05:44.614 12:03:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.614 12:03:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.614 12:03:45 thread -- scripts/common.sh@365 -- # decimal 1 00:05:44.614 12:03:45 thread -- scripts/common.sh@353 -- # local d=1 00:05:44.614 12:03:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.614 12:03:45 thread -- scripts/common.sh@355 -- # echo 1 00:05:44.614 12:03:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.614 12:03:45 thread -- scripts/common.sh@366 -- # decimal 2 00:05:44.614 12:03:45 thread -- scripts/common.sh@353 -- # local d=2 00:05:44.614 12:03:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.614 12:03:45 thread -- scripts/common.sh@355 -- # echo 2 00:05:44.614 12:03:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.614 12:03:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.614 12:03:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.614 12:03:45 thread -- scripts/common.sh@368 -- # return 0 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.614 --rc genhtml_branch_coverage=1 00:05:44.614 --rc genhtml_function_coverage=1 00:05:44.614 --rc genhtml_legend=1 00:05:44.614 --rc geninfo_all_blocks=1 00:05:44.614 --rc geninfo_unexecuted_blocks=1 00:05:44.614 00:05:44.614 ' 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.614 --rc genhtml_branch_coverage=1 00:05:44.614 --rc genhtml_function_coverage=1 00:05:44.614 --rc genhtml_legend=1 00:05:44.614 --rc geninfo_all_blocks=1 00:05:44.614 --rc geninfo_unexecuted_blocks=1 00:05:44.614 00:05:44.614 ' 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:44.614 --rc genhtml_branch_coverage=1 00:05:44.614 --rc genhtml_function_coverage=1 00:05:44.614 --rc genhtml_legend=1 00:05:44.614 --rc geninfo_all_blocks=1 00:05:44.614 --rc geninfo_unexecuted_blocks=1 00:05:44.614 00:05:44.614 ' 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.614 --rc genhtml_branch_coverage=1 00:05:44.614 --rc genhtml_function_coverage=1 00:05:44.614 --rc genhtml_legend=1 00:05:44.614 --rc geninfo_all_blocks=1 00:05:44.614 --rc geninfo_unexecuted_blocks=1 00:05:44.614 00:05:44.614 ' 00:05:44.614 12:03:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.614 12:03:45 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.614 ************************************ 00:05:44.614 START TEST thread_poller_perf 00:05:44.614 ************************************ 00:05:44.614 12:03:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:44.875 [2024-11-25 12:03:45.707402] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:05:44.876 [2024-11-25 12:03:45.708098] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59650 ] 00:05:44.876 [2024-11-25 12:03:45.864652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.139 [2024-11-25 12:03:46.002153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.139 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:46.529 [2024-11-25T12:03:47.609Z] ====================================== 00:05:46.529 [2024-11-25T12:03:47.609Z] busy:2614483328 (cyc) 00:05:46.529 [2024-11-25T12:03:47.609Z] total_run_count: 303000 00:05:46.529 [2024-11-25T12:03:47.609Z] tsc_hz: 2600000000 (cyc) 00:05:46.529 [2024-11-25T12:03:47.609Z] ====================================== 00:05:46.529 [2024-11-25T12:03:47.609Z] poller_cost: 8628 (cyc), 3318 (nsec) 00:05:46.529 00:05:46.529 real 0m1.514s 00:05:46.529 user 0m1.317s 00:05:46.529 sys 0m0.087s 00:05:46.529 ************************************ 00:05:46.529 12:03:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.529 12:03:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.529 END TEST thread_poller_perf 00:05:46.529 ************************************ 00:05:46.529 12:03:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:46.529 12:03:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:46.529 12:03:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.529 12:03:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.529 ************************************ 00:05:46.529 START TEST thread_poller_perf 00:05:46.529 ************************************ 00:05:46.529 12:03:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:46.529 [2024-11-25 12:03:47.294064] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:05:46.529 [2024-11-25 12:03:47.294577] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59687 ] 00:05:46.529 [2024-11-25 12:03:47.460139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.529 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:46.529 [2024-11-25 12:03:47.599282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.918 [2024-11-25T12:03:48.998Z] ====================================== 00:05:47.918 [2024-11-25T12:03:48.998Z] busy:2603788426 (cyc) 00:05:47.918 [2024-11-25T12:03:48.998Z] total_run_count: 3952000 00:05:47.918 [2024-11-25T12:03:48.998Z] tsc_hz: 2600000000 (cyc) 00:05:47.918 [2024-11-25T12:03:48.998Z] ====================================== 00:05:47.918 [2024-11-25T12:03:48.998Z] poller_cost: 658 (cyc), 253 (nsec) 00:05:47.918 00:05:47.918 real 0m1.521s 00:05:47.918 user 0m1.321s 00:05:47.918 sys 0m0.088s 00:05:47.918 12:03:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.918 ************************************ 00:05:47.918 END TEST thread_poller_perf 00:05:47.918 ************************************ 00:05:47.918 12:03:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.918 12:03:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:47.918 00:05:47.918 real 0m3.344s 00:05:47.918 user 0m2.757s 00:05:47.918 sys 0m0.315s 00:05:47.918 12:03:48 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.918 ************************************ 00:05:47.918 END TEST thread 00:05:47.918 ************************************ 00:05:47.918 12:03:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.918 12:03:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:47.918 12:03:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:47.918 12:03:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.918 12:03:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.918 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:05:47.918 ************************************ 00:05:47.918 START TEST app_cmdline 00:05:47.918 ************************************ 00:05:47.918 12:03:48 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:47.918 * Looking for test storage... 
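For reference, the poller_cost figures above follow directly from the totals: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure divides out the 2.6 GHz TSC rate. A quick check with awk, numbers copied from the two result blocks above:

    # run 1 (-l 1): 2614483328 / 303000  -> 8628 cyc; / 2.6 -> 3318 nsec
    # run 2 (-l 0): 2603788426 / 3952000 ->  658 cyc; / 2.6 ->  253 nsec
    awk 'BEGIN {
      printf "run1: %d cyc, %d nsec\n", 2614483328/303000,  2614483328/303000/2.6
      printf "run2: %d cyc, %d nsec\n", 2603788426/3952000, 2603788426/3952000/2.6
    }'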
00:05:47.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:47.918 12:03:48 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.918 12:03:48 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.918 12:03:48 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.179 12:03:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.179 --rc genhtml_branch_coverage=1 00:05:48.179 --rc genhtml_function_coverage=1 00:05:48.179 --rc genhtml_legend=1 00:05:48.179 --rc geninfo_all_blocks=1 00:05:48.179 --rc geninfo_unexecuted_blocks=1 00:05:48.179 00:05:48.179 ' 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.179 --rc genhtml_branch_coverage=1 00:05:48.179 --rc genhtml_function_coverage=1 00:05:48.179 --rc genhtml_legend=1 00:05:48.179 --rc geninfo_all_blocks=1 00:05:48.179 --rc geninfo_unexecuted_blocks=1 00:05:48.179 
00:05:48.179 ' 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.179 --rc genhtml_branch_coverage=1 00:05:48.179 --rc genhtml_function_coverage=1 00:05:48.179 --rc genhtml_legend=1 00:05:48.179 --rc geninfo_all_blocks=1 00:05:48.179 --rc geninfo_unexecuted_blocks=1 00:05:48.179 00:05:48.179 ' 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.179 --rc genhtml_branch_coverage=1 00:05:48.179 --rc genhtml_function_coverage=1 00:05:48.179 --rc genhtml_legend=1 00:05:48.179 --rc geninfo_all_blocks=1 00:05:48.179 --rc geninfo_unexecuted_blocks=1 00:05:48.179 00:05:48.179 ' 00:05:48.179 12:03:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:48.179 12:03:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59776 00:05:48.179 12:03:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59776 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59776 ']' 00:05:48.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.179 12:03:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.179 12:03:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:48.179 [2024-11-25 12:03:49.145087] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:05:48.179 [2024-11-25 12:03:49.145242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59776 ] 00:05:48.440 [2024-11-25 12:03:49.311216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.440 [2024-11-25 12:03:49.462875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:49.505 { 00:05:49.505 "version": "SPDK v25.01-pre git sha1 393e80fcd", 00:05:49.505 "fields": { 00:05:49.505 "major": 25, 00:05:49.505 "minor": 1, 00:05:49.505 "patch": 0, 00:05:49.505 "suffix": "-pre", 00:05:49.505 "commit": "393e80fcd" 00:05:49.505 } 00:05:49.505 } 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:49.505 12:03:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:49.505 12:03:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.506 12:03:50 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:49.506 12:03:50 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:49.506 12:03:50 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:49.767 request: 00:05:49.767 { 00:05:49.768 "method": "env_dpdk_get_mem_stats", 00:05:49.768 "req_id": 1 00:05:49.768 } 00:05:49.768 Got JSON-RPC error response 00:05:49.768 response: 00:05:49.768 { 00:05:49.768 "code": -32601, 00:05:49.768 "message": "Method not found" 00:05:49.768 } 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.768 12:03:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59776 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59776 ']' 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59776 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59776 00:05:49.768 killing process with pid 59776 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59776' 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@973 -- # kill 59776 00:05:49.768 12:03:50 app_cmdline -- common/autotest_common.sh@978 -- # wait 59776 00:05:51.686 00:05:51.686 real 0m3.618s 00:05:51.686 user 0m3.819s 00:05:51.686 sys 0m0.624s 00:05:51.686 ************************************ 00:05:51.687 END TEST app_cmdline 00:05:51.687 ************************************ 00:05:51.687 12:03:52 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.687 12:03:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.687 12:03:52 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:51.687 12:03:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.687 12:03:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.687 12:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:51.687 ************************************ 00:05:51.687 START TEST version 00:05:51.687 ************************************ 00:05:51.687 12:03:52 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:51.687 * Looking for test storage... 
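The -32601 "Method not found" above is the point of the cmdline test: this spdk_tgt was started with an RPC whitelist, so only the two listed methods are served and everything else is rejected. Sketch of the behavior, with paths and method names as logged:

    # only these two RPC methods are allowed on this target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # ok, whitelisted
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # -32601 Method not found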
00:05:51.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:51.687 12:03:52 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.687 12:03:52 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.687 12:03:52 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.947 12:03:52 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.947 12:03:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.947 12:03:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.947 12:03:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.947 12:03:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.947 12:03:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.947 12:03:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.947 12:03:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.947 12:03:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.947 12:03:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.947 12:03:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.947 12:03:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.947 12:03:52 version -- scripts/common.sh@344 -- # case "$op" in 00:05:51.947 12:03:52 version -- scripts/common.sh@345 -- # : 1 00:05:51.947 12:03:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.947 12:03:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.947 12:03:52 version -- scripts/common.sh@365 -- # decimal 1 00:05:51.947 12:03:52 version -- scripts/common.sh@353 -- # local d=1 00:05:51.947 12:03:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.947 12:03:52 version -- scripts/common.sh@355 -- # echo 1 00:05:51.947 12:03:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.947 12:03:52 version -- scripts/common.sh@366 -- # decimal 2 00:05:51.947 12:03:52 version -- scripts/common.sh@353 -- # local d=2 00:05:51.947 12:03:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.947 12:03:52 version -- scripts/common.sh@355 -- # echo 2 00:05:51.947 12:03:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.947 12:03:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.947 12:03:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.947 12:03:52 version -- scripts/common.sh@368 -- # return 0 00:05:51.947 12:03:52 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.947 12:03:52 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.947 --rc genhtml_branch_coverage=1 00:05:51.947 --rc genhtml_function_coverage=1 00:05:51.947 --rc genhtml_legend=1 00:05:51.947 --rc geninfo_all_blocks=1 00:05:51.947 --rc geninfo_unexecuted_blocks=1 00:05:51.947 00:05:51.947 ' 00:05:51.947 12:03:52 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.947 --rc genhtml_branch_coverage=1 00:05:51.947 --rc genhtml_function_coverage=1 00:05:51.947 --rc genhtml_legend=1 00:05:51.947 --rc geninfo_all_blocks=1 00:05:51.947 --rc geninfo_unexecuted_blocks=1 00:05:51.947 00:05:51.947 ' 00:05:51.947 12:03:52 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.947 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:51.947 --rc genhtml_branch_coverage=1 00:05:51.947 --rc genhtml_function_coverage=1 00:05:51.947 --rc genhtml_legend=1 00:05:51.947 --rc geninfo_all_blocks=1 00:05:51.947 --rc geninfo_unexecuted_blocks=1 00:05:51.947 00:05:51.947 ' 00:05:51.947 12:03:52 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.947 --rc genhtml_branch_coverage=1 00:05:51.947 --rc genhtml_function_coverage=1 00:05:51.947 --rc genhtml_legend=1 00:05:51.947 --rc geninfo_all_blocks=1 00:05:51.947 --rc geninfo_unexecuted_blocks=1 00:05:51.947 00:05:51.947 ' 00:05:51.947 12:03:52 version -- app/version.sh@17 -- # get_header_version major 00:05:51.947 12:03:52 version -- app/version.sh@14 -- # cut -f2 00:05:51.948 12:03:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.948 12:03:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:51.948 12:03:52 version -- app/version.sh@17 -- # major=25 00:05:51.948 12:03:52 version -- app/version.sh@18 -- # get_header_version minor 00:05:51.948 12:03:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:51.948 12:03:52 version -- app/version.sh@14 -- # cut -f2 00:05:51.948 12:03:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.948 12:03:52 version -- app/version.sh@18 -- # minor=1 00:05:51.948 12:03:52 version -- app/version.sh@19 -- # get_header_version patch 00:05:51.948 12:03:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:51.948 12:03:52 version -- app/version.sh@14 -- # cut -f2 00:05:51.948 12:03:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.948 12:03:52 version -- app/version.sh@19 -- # patch=0 00:05:51.948 12:03:52 version -- app/version.sh@20 -- # get_header_version suffix 00:05:51.948 12:03:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:51.948 12:03:52 version -- app/version.sh@14 -- # cut -f2 00:05:51.948 12:03:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.948 12:03:52 version -- app/version.sh@20 -- # suffix=-pre 00:05:51.948 12:03:52 version -- app/version.sh@22 -- # version=25.1 00:05:51.948 12:03:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:51.948 12:03:52 version -- app/version.sh@28 -- # version=25.1rc0 00:05:51.948 12:03:52 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:51.948 12:03:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:51.948 12:03:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:51.948 12:03:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:51.948 00:05:51.948 real 0m0.233s 00:05:51.948 user 0m0.138s 00:05:51.948 sys 0m0.121s 00:05:51.948 ************************************ 00:05:51.948 END TEST version 00:05:51.948 ************************************ 00:05:51.948 12:03:52 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.948 12:03:52 version -- common/autotest_common.sh@10 -- # set +x 00:05:51.948 12:03:52 -- 
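Each get_header_version call above greps one #define out of include/spdk/version.h, takes field 2, and strips the quotes; with patch=0 and suffix -pre the tree reports 25.1rc0, which matches python's spdk.__version__. A condensed sketch of the same flow (the rc0 suffix condition is paraphrased from the outcome, not copied from version.sh verbatim):

    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')  # 25
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')  # 1
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')  # 0
    version="$major.$minor"                    # 25.1
    (( patch != 0 )) && version="$version.$patch"
    version="${version}rc0"                    # suffix -pre maps to rc0 here
    echo "$version"                            # 25.1rc0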
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:51.948 12:03:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:51.948 12:03:52 -- spdk/autotest.sh@194 -- # uname -s 00:05:51.948 12:03:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:51.948 12:03:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.948 12:03:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.948 12:03:52 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:51.948 12:03:52 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:51.948 12:03:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.948 12:03:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.948 12:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:51.948 ************************************ 00:05:51.948 START TEST blockdev_nvme 00:05:51.948 ************************************ 00:05:51.948 12:03:52 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:51.948 * Looking for test storage... 00:05:51.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:51.948 12:03:52 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.948 12:03:52 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.948 12:03:52 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.209 12:03:53 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:52.209 12:03:53 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.210 12:03:53 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.210 12:03:53 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.210 12:03:53 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.210 --rc genhtml_branch_coverage=1 00:05:52.210 --rc genhtml_function_coverage=1 00:05:52.210 --rc genhtml_legend=1 00:05:52.210 --rc geninfo_all_blocks=1 00:05:52.210 --rc geninfo_unexecuted_blocks=1 00:05:52.210 00:05:52.210 ' 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.210 --rc genhtml_branch_coverage=1 00:05:52.210 --rc genhtml_function_coverage=1 00:05:52.210 --rc genhtml_legend=1 00:05:52.210 --rc geninfo_all_blocks=1 00:05:52.210 --rc geninfo_unexecuted_blocks=1 00:05:52.210 00:05:52.210 ' 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.210 --rc genhtml_branch_coverage=1 00:05:52.210 --rc genhtml_function_coverage=1 00:05:52.210 --rc genhtml_legend=1 00:05:52.210 --rc geninfo_all_blocks=1 00:05:52.210 --rc geninfo_unexecuted_blocks=1 00:05:52.210 00:05:52.210 ' 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.210 --rc genhtml_branch_coverage=1 00:05:52.210 --rc genhtml_function_coverage=1 00:05:52.210 --rc genhtml_legend=1 00:05:52.210 --rc geninfo_all_blocks=1 00:05:52.210 --rc geninfo_unexecuted_blocks=1 00:05:52.210 00:05:52.210 ' 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:52.210 12:03:53 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59959 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:52.210 12:03:53 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59959 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59959 ']' 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.210 12:03:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.210 [2024-11-25 12:03:53.192256] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:05:52.210 [2024-11-25 12:03:53.192679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59959 ] 00:05:52.471 [2024-11-25 12:03:53.359088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.471 [2024-11-25 12:03:53.495669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.496 12:03:54 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.496 12:03:54 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:53.496 12:03:54 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:53.496 12:03:54 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:53.496 12:03:54 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:53.496 12:03:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:53.496 12:03:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:53.496 12:03:54 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:53.496 12:03:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.496 12:03:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.759 12:03:54 blockdev_nvme -- 
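The JSON blob fed to load_subsystem_config above is produced by scripts/gen_nvme.sh and attaches the four QEMU NVMe controllers as bdevs. Trimmed to a single controller, the payload has this shape (method name and params exactly as logged):

    rpc_cmd load_subsystem_config -j '{
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }'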
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.759 12:03:54 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:53.759 12:03:54 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:53.760 12:03:54 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2dafb843-dd41-48e1-a42a-02c4e4df6310"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2dafb843-dd41-48e1-a42a-02c4e4df6310",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "979e5f8b-abf0-48c3-832b-f672a514733c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "979e5f8b-abf0-48c3-832b-f672a514733c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d05df9cd-b1f9-4491-814c-6d574c5f4842"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d05df9cd-b1f9-4491-814c-6d574c5f4842",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "944613f4-f847-4b83-ac44-16ab62fbb960"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "944613f4-f847-4b83-ac44-16ab62fbb960",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5385ba62-f28c-4f5c-9c04-23d39acdd824"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "5385ba62-f28c-4f5c-9c04-23d39acdd824",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d6139559-3eb9-490d-bc95-645cad4813de"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d6139559-3eb9-490d-bc95-645cad4813de",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:53.760 12:03:54 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:53.760 12:03:54 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:53.760 12:03:54 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:53.760 12:03:54 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59959 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59959 ']' 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59959 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:53.760 12:03:54 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59959 00:05:53.760 killing process with pid 59959 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59959' 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59959 00:05:53.760 12:03:54 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59959 00:05:55.678 12:03:56 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:55.678 12:03:56 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:55.678 12:03:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:55.678 12:03:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.678 12:03:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:55.678 ************************************ 00:05:55.678 START TEST bdev_hello_world 00:05:55.678 ************************************ 00:05:55.678 12:03:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:55.678 [2024-11-25 12:03:56.601201] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:05:55.678 [2024-11-25 12:03:56.601402] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60043 ] 00:05:55.939 [2024-11-25 12:03:56.771998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.939 [2024-11-25 12:03:56.913756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.513 [2024-11-25 12:03:57.511565] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:56.513 [2024-11-25 12:03:57.511852] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:56.513 [2024-11-25 12:03:57.511886] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:56.513 [2024-11-25 12:03:57.514686] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:56.513 [2024-11-25 12:03:57.515741] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:56.513 [2024-11-25 12:03:57.515786] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:56.513 [2024-11-25 12:03:57.516353] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
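[Editor's sketch] The bdev_hello_world flow traced above (and wrapped up just below) boils down to one invocation: the hello_bdev example binary takes a JSON bdev configuration and a bdev name, writes the string "Hello World!" to the bdev, and reads it back. A minimal manual reproduction, assuming the SPDK build tree and the bdev.json used by this run (paths are the ones from this job, not universal defaults; run as root so the PCIe devices are accessible):

  cd /home/vagrant/spdk_repo/spdk
  # test/bdev/bdev.json describes the four PCIe NVMe controllers attached above;
  # any JSON config that defines a writable bdev should work in its place
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

On success it logs the same sequence seen here: application start, opening the bdev and io channel, a completed write, then "Read string from bdev : Hello World!" before stopping the app.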
00:05:56.513 00:05:56.513 [2024-11-25 12:03:57.516393] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:57.455 00:05:57.455 real 0m1.812s 00:05:57.455 user 0m1.439s 00:05:57.455 sys 0m0.259s 00:05:57.455 12:03:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.455 ************************************ 00:05:57.455 END TEST bdev_hello_world 00:05:57.455 ************************************ 00:05:57.455 12:03:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:57.455 12:03:58 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:57.455 12:03:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:57.455 12:03:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.455 12:03:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.455 ************************************ 00:05:57.455 START TEST bdev_bounds 00:05:57.455 ************************************ 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60085 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.455 Process bdevio pid: 60085 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60085' 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60085 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60085 ']' 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.455 12:03:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:57.455 [2024-11-25 12:03:58.479334] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:05:57.455 [2024-11-25 12:03:58.479492] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60085 ] 00:05:57.717 [2024-11-25 12:03:58.643500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.717 [2024-11-25 12:03:58.791474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.717 [2024-11-25 12:03:58.791774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.717 [2024-11-25 12:03:58.791788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.661 12:03:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.661 12:03:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:58.661 12:03:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:58.661 I/O targets: 00:05:58.661 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:58.661 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:58.661 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:58.662 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:58.662 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:58.662 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:58.662 00:05:58.662 00:05:58.662 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.662 http://cunit.sourceforge.net/ 00:05:58.662 00:05:58.662 00:05:58.662 Suite: bdevio tests on: Nvme3n1 00:05:58.662 Test: blockdev write read block ...passed 00:05:58.662 Test: blockdev write zeroes read block ...passed 00:05:58.662 Test: blockdev write zeroes read no split ...passed 00:05:58.662 Test: blockdev write zeroes read split ...passed 00:05:58.662 Test: blockdev write zeroes read split partial ...passed 00:05:58.662 Test: blockdev reset ...[2024-11-25 12:03:59.631786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:58.662 [2024-11-25 12:03:59.635496] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:58.662 passed 00:05:58.662 Test: blockdev write read 8 blocks ...passed 00:05:58.662 Test: blockdev write read size > 128k ...passed 00:05:58.662 Test: blockdev write read invalid size ...passed 00:05:58.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.662 Test: blockdev write read max offset ...passed 00:05:58.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.662 Test: blockdev writev readv 8 blocks ...passed 00:05:58.662 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.662 Test: blockdev writev readv block ...passed 00:05:58.662 Test: blockdev writev readv size > 128k ...passed 00:05:58.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.662 Test: blockdev comparev and writev ...[2024-11-25 12:03:59.649184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba00a000 len:0x1000 00:05:58.662 [2024-11-25 12:03:59.649271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.662 passed 00:05:58.662 Test: blockdev nvme passthru rw ...passed 00:05:58.662 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.662 Test: blockdev nvme admin passthru ...[2024-11-25 12:03:59.651059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:58.662 [2024-11-25 12:03:59.651125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:58.662 passed 00:05:58.662 Test: blockdev copy ...passed 00:05:58.662 Suite: bdevio tests on: Nvme2n3 00:05:58.662 Test: blockdev write read block ...passed 00:05:58.662 Test: blockdev write zeroes read block ...passed 00:05:58.662 Test: blockdev write zeroes read no split ...passed 00:05:58.662 Test: blockdev write zeroes read split ...passed 00:05:58.662 Test: blockdev write zeroes read split partial ...passed 00:05:58.662 Test: blockdev reset ...[2024-11-25 12:03:59.734700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:58.662 passed 00:05:58.662 Test: blockdev write read 8 blocks ...[2024-11-25 12:03:59.738470] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:58.924 passed 00:05:58.924 Test: blockdev write read size > 128k ...passed 00:05:58.924 Test: blockdev write read invalid size ...passed 00:05:58.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.924 Test: blockdev write read max offset ...passed 00:05:58.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.924 Test: blockdev writev readv 8 blocks ...passed 00:05:58.924 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.924 Test: blockdev writev readv block ...passed 00:05:58.924 Test: blockdev writev readv size > 128k ...passed 00:05:58.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.924 Test: blockdev comparev and writev ...[2024-11-25 12:03:59.751533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be406000 len:0x1000 00:05:58.924 [2024-11-25 12:03:59.751616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.924 passed 00:05:58.924 Test: blockdev nvme passthru rw ...passed 00:05:58.924 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:03:59.753361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:58.924 [2024-11-25 12:03:59.753421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:58.924 passed 00:05:58.924 Test: blockdev nvme admin passthru ...passed 00:05:58.924 Test: blockdev copy ...passed 00:05:58.924 Suite: bdevio tests on: Nvme2n2 00:05:58.924 Test: blockdev write read block ...passed 00:05:58.924 Test: blockdev write zeroes read block ...passed 00:05:58.924 Test: blockdev write zeroes read no split ...passed 00:05:58.924 Test: blockdev write zeroes read split ...passed 00:05:58.924 Test: blockdev write zeroes read split partial ...passed 00:05:58.924 Test: blockdev reset ...[2024-11-25 12:03:59.837363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:58.924 [2024-11-25 12:03:59.843351] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:58.924 passed 00:05:58.924 Test: blockdev write read 8 blocks ...passed 00:05:58.924 Test: blockdev write read size > 128k ...passed 00:05:58.924 Test: blockdev write read invalid size ...passed 00:05:58.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.924 Test: blockdev write read max offset ...passed 00:05:58.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.924 Test: blockdev writev readv 8 blocks ...passed 00:05:58.924 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.924 Test: blockdev writev readv block ...passed 00:05:58.924 Test: blockdev writev readv size > 128k ...passed 00:05:58.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.924 Test: blockdev comparev and writev ...[2024-11-25 12:03:59.861247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d723c000 len:0x1000 00:05:58.924 [2024-11-25 12:03:59.861338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.924 passed 00:05:58.924 Test: blockdev nvme passthru rw ...passed 00:05:58.924 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.924 Test: blockdev nvme admin passthru ...[2024-11-25 12:03:59.863852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:58.924 [2024-11-25 12:03:59.863909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:58.924 passed 00:05:58.924 Test: blockdev copy ...passed 00:05:58.924 Suite: bdevio tests on: Nvme2n1 00:05:58.924 Test: blockdev write read block ...passed 00:05:58.924 Test: blockdev write zeroes read block ...passed 00:05:58.924 Test: blockdev write zeroes read no split ...passed 00:05:58.924 Test: blockdev write zeroes read split ...passed 00:05:58.924 Test: blockdev write zeroes read split partial ...passed 00:05:58.924 Test: blockdev reset ...[2024-11-25 12:03:59.946716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:58.924 [2024-11-25 12:03:59.953025] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:58.924 passed 00:05:58.924 Test: blockdev write read 8 blocks ...passed 00:05:58.924 Test: blockdev write read size > 128k ...passed 00:05:58.924 Test: blockdev write read invalid size ...passed 00:05:58.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.924 Test: blockdev write read max offset ...passed 00:05:58.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.924 Test: blockdev writev readv 8 blocks ...passed 00:05:58.924 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.924 Test: blockdev writev readv block ...passed 00:05:58.924 Test: blockdev writev readv size > 128k ...passed 00:05:58.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.924 Test: blockdev comparev and writev ...[2024-11-25 12:03:59.972862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7238000 len:0x1000 00:05:58.924 [2024-11-25 12:03:59.972968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:58.924 passed 00:05:58.924 Test: blockdev nvme passthru rw ...passed 00:05:58.924 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:03:59.975095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:58.924 [2024-11-25 12:03:59.975153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:58.924 passed 00:05:58.925 Test: blockdev nvme admin passthru ...passed 00:05:58.925 Test: blockdev copy ...passed 00:05:58.925 Suite: bdevio tests on: Nvme1n1 00:05:58.925 Test: blockdev write read block ...passed 00:05:59.187 Test: blockdev write zeroes read block ...passed 00:05:59.187 Test: blockdev write zeroes read no split ...passed 00:05:59.187 Test: blockdev write zeroes read split ...passed 00:05:59.187 Test: blockdev write zeroes read split partial ...passed 00:05:59.187 Test: blockdev reset ...[2024-11-25 12:04:00.084093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:59.187 [2024-11-25 12:04:00.090237] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:59.187 passed 00:05:59.187 Test: blockdev write read 8 blocks ...passed 00:05:59.187 Test: blockdev write read size > 128k ...passed 00:05:59.187 Test: blockdev write read invalid size ...passed 00:05:59.187 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:59.187 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:59.187 Test: blockdev write read max offset ...passed 00:05:59.187 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:59.187 Test: blockdev writev readv 8 blocks ...passed 00:05:59.187 Test: blockdev writev readv 30 x 1block ...passed 00:05:59.187 Test: blockdev writev readv block ...passed 00:05:59.187 Test: blockdev writev readv size > 128k ...passed 00:05:59.187 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:59.187 Test: blockdev comparev and writev ...[2024-11-25 12:04:00.113544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7234000 len:0x1000 00:05:59.187 [2024-11-25 12:04:00.113624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:59.187 passed 00:05:59.187 Test: blockdev nvme passthru rw ...passed 00:05:59.187 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:04:00.116880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:59.187 [2024-11-25 12:04:00.116937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:59.187 passed 00:05:59.187 Test: blockdev nvme admin passthru ...passed 00:05:59.187 Test: blockdev copy ...passed 00:05:59.187 Suite: bdevio tests on: Nvme0n1 00:05:59.187 Test: blockdev write read block ...passed 00:05:59.187 Test: blockdev write zeroes read block ...passed 00:05:59.187 Test: blockdev write zeroes read no split ...passed 00:05:59.187 Test: blockdev write zeroes read split ...passed 00:05:59.187 Test: blockdev write zeroes read split partial ...passed 00:05:59.187 Test: blockdev reset ...[2024-11-25 12:04:00.262546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:59.448 [2024-11-25 12:04:00.268008] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:05:59.448 passed 00:05:59.448 Test: blockdev write read 8 blocks ...passed 00:05:59.448 Test: blockdev write read size > 128k ...passed 00:05:59.448 Test: blockdev write read invalid size ...passed 00:05:59.448 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:59.448 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:59.448 Test: blockdev write read max offset ...passed 00:05:59.448 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:59.448 Test: blockdev writev readv 8 blocks ...passed 00:05:59.448 Test: blockdev writev readv 30 x 1block ...passed 00:05:59.448 Test: blockdev writev readv block ...passed 00:05:59.448 Test: blockdev writev readv size > 128k ...passed 00:05:59.448 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:59.448 Test: blockdev comparev and writev ...[2024-11-25 12:04:00.285973] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:59.448 separate metadata which is not supported yet. 00:05:59.448 passed 00:05:59.448 Test: blockdev nvme passthru rw ...passed 00:05:59.448 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:04:00.287486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:59.448 [2024-11-25 12:04:00.287551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:59.448 passed 00:05:59.448 Test: blockdev nvme admin passthru ...passed 00:05:59.448 Test: blockdev copy ...passed 00:05:59.448 00:05:59.448 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.448 suites 6 6 n/a 0 0 00:05:59.448 tests 138 138 138 0 0 00:05:59.448 asserts 893 893 893 0 n/a 00:05:59.448 00:05:59.448 Elapsed time = 1.829 seconds 00:05:59.448 0 00:05:59.448 12:04:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60085 00:05:59.448 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60085 ']' 00:05:59.448 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60085 00:05:59.448 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:59.448 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.448 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60085 00:05:59.448 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.448 killing process with pid 60085 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.449 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60085' 00:05:59.449 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60085 00:05:59.449 12:04:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60085 00:06:00.458 12:04:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:00.458 00:06:00.458 real 0m2.827s 00:06:00.458 user 0m7.010s 00:06:00.458 sys 0m0.432s 00:06:00.458 12:04:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.458 ************************************ 00:06:00.458 END TEST bdev_bounds ************************************ 00:06:00.458 12:04:01 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:00.458 12:04:01 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:00.458 12:04:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:00.458 12:04:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.458 12:04:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:00.458 ************************************ 00:06:00.458 START TEST bdev_nbd 00:06:00.458 ************************************ 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60145 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60145 /var/tmp/spdk-nbd.sock 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60145 ']' 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 
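[Editor's sketch] The bdev_nbd test being launched here drives a short RPC sequence against the bdev_svc app on /var/tmp/spdk-nbd.sock: export a bdev as a kernel NBD block device, verify the kernel registered it, do a direct-I/O read, and tear it down again. Stripped of the harness, a rough manual equivalent looks like this (assuming root and a loaded nbd kernel module, which the [[ -e /sys/module/nbd ]] guard above checks; the /tmp output path is illustrative, the harness uses test/bdev/nbdtest):

  cd /home/vagrant/spdk_repo/spdk
  # Export bdev Nvme0n1 as /dev/nbd0 via the NBD kernel driver
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # Same readiness check the waitfornbd helper below performs
  grep -q -w nbd0 /proc/partitions
  # One 4 KiB O_DIRECT read, mirroring the dd step traced below
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  # Detach and confirm nothing is left exported
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks

nbd_get_disks returns a JSON array of nbd_device/bdev_name pairs, which is exactly what the nbd_disks_json output captured further down contains.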
00:06:00.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.458 12:04:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:00.458 [2024-11-25 12:04:01.384397] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:06:00.458 [2024-11-25 12:04:01.384555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.723 [2024-11-25 12:04:01.546257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.723 [2024-11-25 12:04:01.690811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:01.293 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.555 12:04:02 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.555 1+0 records in 00:06:01.555 1+0 records out 00:06:01.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126632 s, 3.2 MB/s 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:01.555 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:01.817 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:01.817 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.080 1+0 records in 00:06:02.080 1+0 records out 00:06:02.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117713 s, 3.5 MB/s 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.080 12:04:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.080 12:04:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:02.080 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:02.080 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.341 1+0 records in 00:06:02.341 1+0 records out 00:06:02.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858066 s, 4.8 MB/s 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.341 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.603 1+0 records in 00:06:02.603 1+0 records out 00:06:02.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928655 s, 4.4 MB/s 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.603 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.866 1+0 records in 00:06:02.866 1+0 records out 00:06:02.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136792 s, 3.0 MB/s 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.866 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:03.129 1+0 records in 00:06:03.129 1+0 records out 00:06:03.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000855288 s, 4.8 MB/s 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:03.129 12:04:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.415 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd0", 00:06:03.415 "bdev_name": "Nvme0n1" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd1", 00:06:03.415 "bdev_name": "Nvme1n1" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd2", 00:06:03.415 "bdev_name": "Nvme2n1" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd3", 00:06:03.415 "bdev_name": "Nvme2n2" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd4", 00:06:03.415 "bdev_name": "Nvme2n3" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd5", 00:06:03.415 "bdev_name": "Nvme3n1" 00:06:03.415 } 00:06:03.415 ]' 00:06:03.415 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:03.415 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:03.415 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd0", 00:06:03.415 "bdev_name": "Nvme0n1" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": 
"/dev/nbd1", 00:06:03.415 "bdev_name": "Nvme1n1" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd2", 00:06:03.415 "bdev_name": "Nvme2n1" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd3", 00:06:03.415 "bdev_name": "Nvme2n2" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd4", 00:06:03.415 "bdev_name": "Nvme2n3" 00:06:03.415 }, 00:06:03.415 { 00:06:03.415 "nbd_device": "/dev/nbd5", 00:06:03.415 "bdev_name": "Nvme3n1" 00:06:03.415 } 00:06:03.415 ]' 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.416 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.676 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.938 12:04:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.199 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.461 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.723 12:04:05 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.723 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:04.986 12:04:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:05.250 /dev/nbd0 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.250 1+0 records in 00:06:05.250 1+0 records out 00:06:05.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000821828 s, 5.0 MB/s 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:05.250 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:05.510 /dev/nbd1 00:06:05.510 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.510 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.510 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.510 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.510 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.510 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.510 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.510 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.511 1+0 records in 00:06:05.511 1+0 records out 00:06:05.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.00175021 s, 2.3 MB/s 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:05.511 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:05.771 /dev/nbd10 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.771 1+0 records in 00:06:05.771 1+0 records out 00:06:05.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143329 s, 2.9 MB/s 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:05.771 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:06.034 /dev/nbd11 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 
-- # local i 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.034 1+0 records in 00:06:06.034 1+0 records out 00:06:06.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102459 s, 4.0 MB/s 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:06.034 12:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:06.359 /dev/nbd12 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.359 1+0 records in 00:06:06.359 1+0 records out 00:06:06.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00193866 s, 2.1 MB/s 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:06.359 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:06.619 /dev/nbd13 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.619 1+0 records in 00:06:06.619 1+0 records out 00:06:06.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121173 s, 3.4 MB/s 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.619 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd0", 00:06:06.881 "bdev_name": "Nvme0n1" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd1", 00:06:06.881 "bdev_name": "Nvme1n1" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd10", 00:06:06.881 "bdev_name": "Nvme2n1" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd11", 00:06:06.881 "bdev_name": "Nvme2n2" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd12", 00:06:06.881 "bdev_name": "Nvme2n3" 00:06:06.881 
}, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd13", 00:06:06.881 "bdev_name": "Nvme3n1" 00:06:06.881 } 00:06:06.881 ]' 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd0", 00:06:06.881 "bdev_name": "Nvme0n1" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd1", 00:06:06.881 "bdev_name": "Nvme1n1" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd10", 00:06:06.881 "bdev_name": "Nvme2n1" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd11", 00:06:06.881 "bdev_name": "Nvme2n2" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd12", 00:06:06.881 "bdev_name": "Nvme2n3" 00:06:06.881 }, 00:06:06.881 { 00:06:06.881 "nbd_device": "/dev/nbd13", 00:06:06.881 "bdev_name": "Nvme3n1" 00:06:06.881 } 00:06:06.881 ]' 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.881 /dev/nbd1 00:06:06.881 /dev/nbd10 00:06:06.881 /dev/nbd11 00:06:06.881 /dev/nbd12 00:06:06.881 /dev/nbd13' 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.881 /dev/nbd1 00:06:06.881 /dev/nbd10 00:06:06.881 /dev/nbd11 00:06:06.881 /dev/nbd12 00:06:06.881 /dev/nbd13' 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:06.881 256+0 records in 00:06:06.881 256+0 records out 00:06:06.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00945786 s, 111 MB/s 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.881 12:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.141 256+0 records in 00:06:07.141 256+0 records out 00:06:07.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.35711 s, 2.9 MB/s 00:06:07.141 12:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.141 12:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:06:07.712 256+0 records in 00:06:07.712 256+0 records out 00:06:07.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.35539 s, 3.0 MB/s 00:06:07.712 12:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.712 12:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:07.973 256+0 records in 00:06:07.973 256+0 records out 00:06:07.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.270072 s, 3.9 MB/s 00:06:07.973 12:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.973 12:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:08.567 256+0 records in 00:06:08.567 256+0 records out 00:06:08.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.771763 s, 1.4 MB/s 00:06:08.567 12:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.567 12:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:09.140 256+0 records in 00:06:09.140 256+0 records out 00:06:09.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.305864 s, 3.4 MB/s 00:06:09.140 12:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.140 12:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:09.140 256+0 records in 00:06:09.140 256+0 records out 00:06:09.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.288959 s, 3.6 MB/s 00:06:09.140 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:09.140 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.140 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.140 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.141 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:09.141 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.141 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.141 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.141 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.402 12:04:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.402 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.663 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.925 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.925 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.925 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.925 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.925 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.925 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.925 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.925 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.926 12:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.188 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:10.448 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:10.448 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:10.448 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:10.448 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.449 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.449 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:10.449 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.449 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.449 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.449 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.709 12:04:11 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.709 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:10.975 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:10.975 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:10.975 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:10.975 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:10.975 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:06:10.975 12:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:06:10.975 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:06:11.292 malloc_lvol_verify
00:06:11.292 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:06:11.554 a21615b6-3ffe-477f-a866-22038810c686
00:06:11.554 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:06:11.815 77679d31-8b18-4f6d-85a0-5a03bbad4d6a
00:06:11.815 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:06:12.075 /dev/nbd0
00:06:12.075 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:06:12.075 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:06:12.075 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:06:12.075 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:06:12.075 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:06:12.075 mke2fs 1.47.0 (5-Feb-2023)
00:06:12.075 Discarding device blocks: 0/4096 done
00:06:12.075 Creating filesystem with 4096 1k blocks and 1024 inodes
00:06:12.075
00:06:12.075 Allocating group tables: 0/1 done
00:06:12.075 Writing inode tables: 0/1 done
00:06:12.075 Creating journal (1024 blocks): done
00:06:12.075 Writing superblocks and filesystem accounting information: 0/1 done
00:06:12.075
00:06:12.076 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:12.076 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:12.076 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:12.076 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:12.076 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:12.076 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:12.076 12:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60145
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60145 ']'
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60145
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60145
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:12.336 killing process with pid 60145
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60145'
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60145
00:06:12.336 12:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60145
00:06:13.281 12:04:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:06:13.281
00:06:13.281 real 0m12.875s
00:06:13.281 user 0m16.963s
00:06:13.281 sys 0m4.232s
00:06:13.281 12:04:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:13.281 12:04:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:13.281 ************************************
00:06:13.281 END TEST bdev_nbd
************************************
00:06:13.281 12:04:14 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:06:13.281 12:04:14 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:06:13.281 skipping fio tests on NVMe due to multi-ns failures.
00:06:13.281 12:04:14 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:06:13.281 12:04:14 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:13.281 12:04:14 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:13.281 12:04:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:13.281 12:04:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:13.281 12:04:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:13.281 ************************************
00:06:13.281 START TEST bdev_verify
00:06:13.281 ************************************
00:06:13.281 12:04:14 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:13.281 [2024-11-25 12:04:14.330050] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization...
00:06:13.543 [2024-11-25 12:04:14.330207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60551 ]
00:06:13.805 [2024-11-25 12:04:14.495028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:13.805 [2024-11-25 12:04:14.637053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:13.805 [2024-11-25 12:04:14.637102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.377 Running I/O for 5 seconds...
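The bdev_verify stage just launched drives all six NVMe bdevs with bdevperf's verify workload: 4 KiB I/Os (-o 4096) at queue depth 128 (-q 128) for five seconds (-t 5). The mask -m 0x3 starts reactors on cores 0 and 1, and -C lets every core submit I/O to every bdev, which is why the results below report two jobs per namespace (Core Mask 0x1 and 0x2). Verify writes a known pattern and reads it back, failing the run on any miscompare. A minimal standalone sketch of the same invocation follows; the controller-attach parameters in the config are illustrative placeholders, not values taken from this run:

    # Hypothetical one-controller bdev config; the PCI address is a placeholder.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF
    # Same flags as the run above: verify workload, QD 128, 4 KiB I/O, 5 s, cores 0-1.
    ./build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3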
00:06:16.720 17088.00 IOPS, 66.75 MiB/s
[2024-11-25T12:04:18.740Z] 17024.00 IOPS, 66.50 MiB/s
[2024-11-25T12:04:19.681Z] 17002.67 IOPS, 66.42 MiB/s
[2024-11-25T12:04:20.621Z] 17056.00 IOPS, 66.62 MiB/s
[2024-11-25T12:04:20.621Z] 17049.60 IOPS, 66.60 MiB/s
00:06:19.541 Latency(us)
00:06:19.541 [2024-11-25T12:04:20.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:19.541 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x0 length 0xbd0bd
00:06:19.541 Nvme0n1 : 5.10 1406.55 5.49 0.00 0.00 90791.10 20971.52 77030.01
00:06:19.541 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:19.541 Nvme0n1 : 5.06 1416.09 5.53 0.00 0.00 90025.39 21475.64 81466.29
00:06:19.541 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x0 length 0xa0000
00:06:19.541 Nvme1n1 : 5.10 1406.13 5.49 0.00 0.00 90702.98 22988.01 72997.02
00:06:19.541 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0xa0000 length 0xa0000
00:06:19.541 Nvme1n1 : 5.09 1419.64 5.55 0.00 0.00 89240.25 13308.85 74610.22
00:06:19.541 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x0 length 0x80000
00:06:19.541 Nvme2n1 : 5.10 1405.69 5.49 0.00 0.00 90564.09 19660.80 69367.34
00:06:19.541 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x80000 length 0x80000
00:06:19.541 Nvme2n1 : 5.11 1427.46 5.58 0.00 0.00 88742.22 13611.32 77433.30
00:06:19.541 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x0 length 0x80000
00:06:19.541 Nvme2n2 : 5.10 1405.26 5.49 0.00 0.00 90530.88 18551.73 72190.42
00:06:19.541 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x80000 length 0x80000
00:06:19.541 Nvme2n2 : 5.11 1427.07 5.57 0.00 0.00 88648.27 11443.59 79853.10
00:06:19.541 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x0 length 0x80000
00:06:19.541 Nvme2n3 : 5.10 1404.38 5.49 0.00 0.00 90400.43 18551.73 73803.62
00:06:19.541 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x80000 length 0x80000
00:06:19.541 Nvme2n3 : 5.11 1426.68 5.57 0.00 0.00 88563.40 11443.59 79046.50
00:06:19.541 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x0 length 0x20000
00:06:19.541 Nvme3n1 : 5.11 1403.98 5.48 0.00 0.00 90253.18 14317.10 75820.11
00:06:19.541 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:19.541 Verification LBA range: start 0x20000 length 0x20000
00:06:19.541 Nvme3n1 : 5.12 1425.83 5.57 0.00 0.00 88514.85 13006.38 78239.90
00:06:19.541 [2024-11-25T12:04:20.621Z] ===================================================================================================================
00:06:19.541 [2024-11-25T12:04:20.622Z] Total : 16974.77 66.31 0.00 0.00 89741.02 11443.59 81466.29
00:06:20.925
00:06:20.925 real 0m7.654s
00:06:20.925 user 0m14.110s
00:06:20.925 sys 0m0.340s
00:06:20.925 12:04:21 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:20.925 ************************************
00:06:20.925 END TEST bdev_verify
00:06:20.925 ************************************
00:06:20.925 12:04:21 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:20.925 12:04:21 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:20.925 12:04:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:20.925 12:04:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:20.925 12:04:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:20.925 ************************************
00:06:20.925 START TEST bdev_verify_big_io
00:06:20.925 ************************************
00:06:20.925 12:04:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:21.267 [2024-11-25 12:04:22.054648] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization...
00:06:21.267 [2024-11-25 12:04:22.054805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ]
00:06:21.529 [2024-11-25 12:04:22.220515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:21.529 [2024-11-25 12:04:22.368930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.529 [2024-11-25 12:04:22.368939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:22.101 Running I/O for 5 seconds...
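bdev_verify_big_io repeats the verify pass with 64 KiB I/Os (-o 65536 in place of -o 4096), stressing request splitting and large-transfer paths rather than raw IOPS. Throughput and IOPS relate as throughput = IOPS x IO size, which can be checked against the Total row of the table that follows:

    # 1684.02 IOPS at 64 KiB per I/O should equal the reported 105.25 MiB/s:
    awk 'BEGIN { printf "%.2f MiB/s\n", 1684.02 * 65536 / 1048576 }'
    # -> 105.25 MiB/s

The latency column is consistent too: by Little's law, a job at queue depth 128 with an average latency around 0.81 s sustains roughly 128 / 0.81 = 158 IOPS, in line with the per-job rows below.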
00:06:25.944 528.00 IOPS, 33.00 MiB/s
[2024-11-25T12:04:28.960Z] 1702.00 IOPS, 106.38 MiB/s
[2024-11-25T12:04:29.221Z] 2340.00 IOPS, 146.25 MiB/s
00:06:28.141 Latency(us)
00:06:28.141 [2024-11-25T12:04:29.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:28.141 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x0 length 0xbd0b
00:06:28.141 Nvme0n1 : 5.80 129.98 8.12 0.00 0.00 953712.07 21878.94 1071160.71
00:06:28.141 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:28.141 Nvme0n1 : 5.78 138.33 8.65 0.00 0.00 893592.94 16636.06 1084066.26
00:06:28.141 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x0 length 0xa000
00:06:28.141 Nvme1n1 : 5.80 128.90 8.06 0.00 0.00 931674.44 61704.66 929199.66
00:06:28.141 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0xa000 length 0xa000
00:06:28.141 Nvme1n1 : 5.85 135.69 8.48 0.00 0.00 879756.99 33675.42 1174405.12
00:06:28.141 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x0 length 0x8000
00:06:28.141 Nvme2n1 : 5.80 128.18 8.01 0.00 0.00 904282.97 64527.75 871124.68
00:06:28.141 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x8000 length 0x8000
00:06:28.141 Nvme2n1 : 5.85 135.14 8.45 0.00 0.00 850265.84 41741.39 974369.08
00:06:28.141 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x0 length 0x8000
00:06:28.141 Nvme2n2 : 5.81 132.25 8.27 0.00 0.00 857069.10 83482.78 935652.43
00:06:28.141 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x8000 length 0x8000
00:06:28.141 Nvme2n2 : 5.89 139.47 8.72 0.00 0.00 800353.04 49807.36 1219574.55
00:06:28.141 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x0 length 0x8000
00:06:28.141 Nvme2n3 : 5.88 141.42 8.84 0.00 0.00 779228.92 25609.45 916294.10
00:06:28.141 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x8000 length 0x8000
00:06:28.141 Nvme2n3 : 5.93 147.37 9.21 0.00 0.00 734554.39 40733.14 1238932.87
00:06:28.141 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x0 length 0x2000
00:06:28.141 Nvme3n1 : 5.94 154.28 9.64 0.00 0.00 693194.61 2747.47 942105.21
00:06:28.141 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:28.141 Verification LBA range: start 0x2000 length 0x2000
00:06:28.141 Nvme3n1 : 6.01 173.01 10.81 0.00 0.00 610359.60 1291.82 1090519.04
00:06:28.141 [2024-11-25T12:04:29.221Z] ===================================================================================================================
00:06:28.141 [2024-11-25T12:04:29.221Z] Total : 1684.02 105.25 0.00 0.00 814612.06 1291.82 1238932.87
00:06:30.051
00:06:30.051 real 0m8.749s
00:06:30.051 user 0m16.332s
00:06:30.051 sys 0m0.320s
00:06:30.051 12:04:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:30.051 12:04:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:30.051 ************************************
00:06:30.051 END TEST bdev_verify_big_io
00:06:30.051 ************************************
00:06:30.051 12:04:30 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:30.051 12:04:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:30.051 12:04:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.051 12:04:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:30.051 ************************************
00:06:30.051 START TEST bdev_write_zeroes
00:06:30.051 ************************************
00:06:30.051 12:04:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:30.052 [2024-11-25 12:04:30.889350] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization...
00:06:30.052 [2024-11-25 12:04:30.889547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60762 ]
00:06:30.312 [2024-11-25 12:04:31.061770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.312 [2024-11-25 12:04:31.191263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.884 Running I/O for 1 seconds...
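bdev_write_zeroes switches the workload to -w write_zeroes for one second on a single core (-c 0x1 above, hence one job per bdev in the results). Each request asks the bdev to zero a 4 KiB range; for NVMe bdevs this typically maps to the NVMe Write Zeroes command, and the bdev layer can fall back to writing zero-filled buffers for devices without native support. The progress line is easy to sanity-check: 39168 IOPS x 4096 bytes is exactly the 153.00 MiB/s printed below. A standalone invocation with the same flags would look like:

    # One-second zero-fill pass over every bdev in the config (config as sketched earlier).
    ./build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 4096 -w write_zeroes -t 1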
00:06:31.828 39168.00 IOPS, 153.00 MiB/s
00:06:31.828 Latency(us)
00:06:31.828 [2024-11-25T12:04:32.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:31.828 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:31.828 Nvme0n1 : 1.02 6569.70 25.66 0.00 0.00 19415.93 5873.03 36901.81
00:06:31.828 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:31.828 Nvme1n1 : 1.02 6560.55 25.63 0.00 0.00 19422.61 11998.13 36296.86
00:06:31.828 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:31.828 Nvme2n1 : 1.03 6552.82 25.60 0.00 0.00 19319.28 11544.42 33473.77
00:06:31.828 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:31.828 Nvme2n2 : 1.03 6545.15 25.57 0.00 0.00 19280.82 11594.83 32062.23
00:06:31.828 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:31.828 Nvme2n3 : 1.03 6593.69 25.76 0.00 0.00 19133.11 7360.20 32667.18
00:06:31.828 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:31.828 Nvme3n1 : 1.03 6586.02 25.73 0.00 0.00 19097.80 7813.91 31457.28
00:06:31.828 [2024-11-25T12:04:32.908Z] ===================================================================================================================
00:06:31.828 [2024-11-25T12:04:32.908Z] Total : 39407.94 153.94 0.00 0.00 19277.74 5873.03 36901.81
00:06:32.772
00:06:32.772 real 0m2.915s
00:06:32.772 user 0m2.508s
00:06:32.772 sys 0m0.278s
00:06:32.772 12:04:33 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.772 ************************************
00:06:32.772 END TEST bdev_write_zeroes
00:06:32.772 ************************************
00:06:32.772 12:04:33 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:32.772 12:04:33 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:32.772 12:04:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:32.772 12:04:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.772 12:04:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:32.772 ************************************
00:06:32.772 START TEST bdev_json_nonenclosed
00:06:32.772 ************************************
00:06:32.772 12:04:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:33.035 [2024-11-25 12:04:33.863976] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization...
00:06:33.035 [2024-11-25 12:04:33.864129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60815 ]
00:06:33.296 [2024-11-25 12:04:34.029131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.296 [2024-11-25 12:04:34.169097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.296 [2024-11-25 12:04:34.169198] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:33.296 [2024-11-25 12:04:34.169216] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:33.296 [2024-11-25 12:04:34.169226] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:33.296
00:06:33.296 real 0m0.587s
00:06:33.296 user 0m0.359s
00:06:33.296 sys 0m0.121s
00:06:33.296 12:04:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:33.296 ************************************
00:06:33.296 END TEST bdev_json_nonenclosed
00:06:33.296 ************************************
00:06:33.559 12:04:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:06:33.559 12:04:34 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:33.559 12:04:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:33.559 12:04:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:33.559 12:04:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:33.559 ************************************
00:06:33.559 START TEST bdev_json_nonarray
00:06:33.559 ************************************
00:06:33.559 12:04:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:33.559 [2024-11-25 12:04:34.515162] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization...
00:06:33.559 [2024-11-25 12:04:34.515309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60846 ]
00:06:33.821 [2024-11-25 12:04:34.677539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.821 [2024-11-25 12:04:34.820303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.821 [2024-11-25 12:04:34.820425] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
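bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at deliberately malformed configs (nonenclosed.json and nonarray.json) and must refuse to start, printing the two json_config errors seen here and stopping with a non-zero code, which the surrounding test script treats as the expected outcome. The fixture shapes can be inferred from the error text; a sketch of the two failure modes next to a well-formed config (illustrative, not the repo files verbatim):

    # nonenclosed.json-style breakage: top-level object braces are missing.
    #   "subsystems": []
    # nonarray.json-style breakage: "subsystems" is an object rather than an array.
    #   { "subsystems": { "subsystem": "bdev", "config": [] } }
    # A valid config keeps "subsystems" as an array of subsystem objects:
    #   { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }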
00:06:33.821 [2024-11-25 12:04:34.820445] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:33.821 [2024-11-25 12:04:34.820455] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.122 00:06:34.122 real 0m0.582s 00:06:34.122 user 0m0.364s 00:06:34.122 sys 0m0.112s 00:06:34.122 12:04:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.122 ************************************ 00:06:34.122 END TEST bdev_json_nonarray 00:06:34.122 ************************************ 00:06:34.122 12:04:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:34.122 12:04:35 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:34.122 00:06:34.122 real 0m42.184s 00:06:34.122 user 1m2.732s 00:06:34.122 sys 0m7.018s 00:06:34.122 ************************************ 00:06:34.122 END TEST blockdev_nvme 00:06:34.122 ************************************ 00:06:34.122 12:04:35 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.122 12:04:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.122 12:04:35 -- spdk/autotest.sh@209 -- # uname -s 00:06:34.122 12:04:35 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:34.122 12:04:35 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:34.122 12:04:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.122 12:04:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.122 12:04:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.122 ************************************ 00:06:34.122 START TEST blockdev_nvme_gpt 00:06:34.122 ************************************ 00:06:34.122 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:34.410 * Looking for test storage... 
00:06:34.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.410 12:04:35 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.410 --rc genhtml_branch_coverage=1 00:06:34.410 --rc genhtml_function_coverage=1 00:06:34.410 --rc genhtml_legend=1 00:06:34.410 --rc geninfo_all_blocks=1 00:06:34.410 --rc geninfo_unexecuted_blocks=1 00:06:34.410 00:06:34.410 ' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.410 --rc 
genhtml_branch_coverage=1 00:06:34.410 --rc genhtml_function_coverage=1 00:06:34.410 --rc genhtml_legend=1 00:06:34.410 --rc geninfo_all_blocks=1 00:06:34.410 --rc geninfo_unexecuted_blocks=1 00:06:34.410 00:06:34.410 ' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.410 --rc genhtml_branch_coverage=1 00:06:34.410 --rc genhtml_function_coverage=1 00:06:34.410 --rc genhtml_legend=1 00:06:34.410 --rc geninfo_all_blocks=1 00:06:34.410 --rc geninfo_unexecuted_blocks=1 00:06:34.410 00:06:34.410 ' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.410 --rc genhtml_branch_coverage=1 00:06:34.410 --rc genhtml_function_coverage=1 00:06:34.410 --rc genhtml_legend=1 00:06:34.410 --rc geninfo_all_blocks=1 00:06:34.410 --rc geninfo_unexecuted_blocks=1 00:06:34.410 00:06:34.410 ' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:34.410 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:34.411 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:34.411 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:34.411 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60924 00:06:34.411 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:34.411 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:34.411 12:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60924 00:06:34.411 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60924 ']' 00:06:34.411 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.411 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.411 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.411 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.411 12:04:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.411 [2024-11-25 12:04:35.413265] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:06:34.411 [2024-11-25 12:04:35.413639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60924 ] 00:06:34.672 [2024-11-25 12:04:35.580483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.672 [2024-11-25 12:04:35.725304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.617 12:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.617 12:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:35.617 12:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:35.617 12:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:35.617 12:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:35.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.881 Waiting for block devices as requested 00:06:36.141 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.141 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.141 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.402 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:41.774 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:41.774 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:41.774 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:41.775 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:41.775 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:41.775 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
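# Aside (annotation, not captured output): the scan above is the zoned-namespace filter that runs before a GPT
# target is chosen -- for each /sys/block/nvme* entry it reads queue/zoned and treats any value other than
# "none" as zoned. A standalone sketch of the same check (illustrative only):
for dev in /sys/block/nvme*; do
    zoned=$(cat "$dev/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && echo "zoned device, skipping: ${dev##*/}"
done
# Every namespace here reports "none", so nvme_devs keeps all six block devices.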
00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:41.775 BYT; 00:06:41.775 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:41.775 BYT; 00:06:41.775 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:41.775 12:04:42 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:41.775 12:04:42 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:42.717 The operation has completed successfully. 00:06:42.717 12:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:43.660 The operation has completed successfully. 00:06:43.660 12:04:44 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:44.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.802 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.802 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.802 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.802 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:45.064 12:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:45.064 12:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.064 12:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.064 [] 00:06:45.064 12:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.064 12:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:45.064 12:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:45.064 12:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:45.064 12:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:45.064 12:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:45.064 12:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.064 12:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.326 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.326 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:45.326 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.326 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.326 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.326 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:45.326 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:45.326 12:04:46 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.326 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.327 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.327 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.327 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:45.327 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:45.327 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.327 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:45.327 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:45.590 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "eceb4aa9-48ad-48ae-963c-92ea77ec871a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "eceb4aa9-48ad-48ae-963c-92ea77ec871a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2ef619b1-bd71-4996-b8ed-d597eac76cb1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2ef619b1-bd71-4996-b8ed-d597eac76cb1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "43d07cac-2351-4c9a-a183-ad10df1302f6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43d07cac-2351-4c9a-a183-ad10df1302f6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b7d2c98f-8136-4e43-8155-197e25e427c5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b7d2c98f-8136-4e43-8155-197e25e427c5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "fd3a254d-ce7d-40d4-aea6-9693a1b9b36a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fd3a254d-ce7d-40d4-aea6-9693a1b9b36a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:45.590 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:45.590 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:45.590 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:45.590 12:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60924 00:06:45.590 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60924 ']' 00:06:45.590 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60924 00:06:45.590 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:45.590 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.590 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60924 00:06:45.590 killing process with pid 60924 00:06:45.590 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.590 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.591 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60924' 00:06:45.591 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60924 00:06:45.591 12:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60924 00:06:47.518 12:04:48 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:47.518 12:04:48 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:47.518 12:04:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:47.518 12:04:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.518 12:04:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.518 ************************************ 00:06:47.518 START TEST bdev_hello_world 00:06:47.518 ************************************ 00:06:47.518 12:04:48 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:47.518 
[2024-11-25 12:04:48.280043] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:06:47.518 [2024-11-25 12:04:48.280211] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61557 ] 00:06:47.518 [2024-11-25 12:04:48.447680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.518 [2024-11-25 12:04:48.592705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.461 [2024-11-25 12:04:49.202346] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:48.461 [2024-11-25 12:04:49.202432] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:48.461 [2024-11-25 12:04:49.202465] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:48.461 [2024-11-25 12:04:49.205543] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:48.461 [2024-11-25 12:04:49.206573] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:48.461 [2024-11-25 12:04:49.206620] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:48.461 [2024-11-25 12:04:49.207476] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:48.461 00:06:48.461 [2024-11-25 12:04:49.207735] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:49.035 ************************************ 00:06:49.035 END TEST bdev_hello_world 00:06:49.035 ************************************ 00:06:49.035 00:06:49.035 real 0m1.836s 00:06:49.035 user 0m1.451s 00:06:49.035 sys 0m0.267s 00:06:49.035 12:04:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.035 12:04:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:49.035 12:04:50 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:49.035 12:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.035 12:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.035 12:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.297 ************************************ 00:06:49.297 START TEST bdev_bounds 00:06:49.297 ************************************ 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:49.297 Process bdevio pid: 61599 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61599 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61599' 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61599 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61599 ']' 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.297 12:04:50 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.297 12:04:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.298 12:04:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.298 12:04:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:49.298 [2024-11-25 12:04:50.196426] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:06:49.298 [2024-11-25 12:04:50.196595] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61599 ] 00:06:49.298 [2024-11-25 12:04:50.363908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.561 [2024-11-25 12:04:50.512277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.561 [2024-11-25 12:04:50.512670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.561 [2024-11-25 12:04:50.512924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.132 12:04:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.132 12:04:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:50.132 12:04:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:50.392 I/O targets: 00:06:50.392 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:50.392 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:50.392 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:50.392 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:50.392 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:50.392 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:50.392 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:50.392 00:06:50.392 00:06:50.392 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.392 http://cunit.sourceforge.net/ 00:06:50.392 00:06:50.392 00:06:50.392 Suite: bdevio tests on: Nvme3n1 00:06:50.392 Test: blockdev write read block ...passed 00:06:50.392 Test: blockdev write zeroes read block ...passed 00:06:50.392 Test: blockdev write zeroes read no split ...passed 00:06:50.392 Test: blockdev write zeroes read split ...passed 00:06:50.392 Test: blockdev write zeroes read split partial ...passed 00:06:50.392 Test: blockdev reset ...[2024-11-25 12:04:51.325638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:50.392 [2024-11-25 12:04:51.331351] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:06:50.392 passed 00:06:50.392 Test: blockdev write read 8 blocks ...passed 00:06:50.392 Test: blockdev write read size > 128k ...passed 00:06:50.392 Test: blockdev write read invalid size ...passed 00:06:50.392 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.392 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.392 Test: blockdev write read max offset ...passed 00:06:50.392 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.392 Test: blockdev writev readv 8 blocks ...passed 00:06:50.392 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.392 Test: blockdev writev readv block ...passed 00:06:50.392 Test: blockdev writev readv size > 128k ...passed 00:06:50.392 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.392 Test: blockdev comparev and writev ...[2024-11-25 12:04:51.356333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf004000 len:0x1000 00:06:50.392 [2024-11-25 12:04:51.356425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.392 passed 00:06:50.392 Test: blockdev nvme passthru rw ...passed 00:06:50.392 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:04:51.359691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.392 [2024-11-25 12:04:51.359868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.392 passed 00:06:50.392 Test: blockdev nvme admin passthru ...passed 00:06:50.392 Test: blockdev copy ...passed 00:06:50.392 Suite: bdevio tests on: Nvme2n3 00:06:50.392 Test: blockdev write read block ...passed 00:06:50.392 Test: blockdev write zeroes read block ...passed 00:06:50.392 Test: blockdev write zeroes read no split ...passed 00:06:50.392 Test: blockdev write zeroes read split ...passed 00:06:50.392 Test: blockdev write zeroes read split partial ...passed 00:06:50.392 Test: blockdev reset ...[2024-11-25 12:04:51.443217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:50.392 [2024-11-25 12:04:51.450149] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
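# Aside (annotation, not captured output): the COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions in
# these suites are expected -- bdevio deliberately issues a COMPARE against mismatching data and an unsupported
# admin opcode, and each such test still ends in "passed". The (xx/yy) pair is NVMe Status Code Type / Status
# Code; a tiny decoder for the two values seen here (illustrative only):
decode_nvme_status() {  # usage: decode_nvme_status 02 85
    case "$1/$2" in
        00/01) echo 'Generic Command Status / Invalid Command Opcode' ;;
        02/85) echo 'Media and Data Integrity Errors / Compare Failure' ;;
        *)     echo "SCT $1 / SC $2 -- see the NVMe base specification" ;;
    esac
}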
00:06:50.392 passed 00:06:50.392 Test: blockdev write read 8 blocks ...passed 00:06:50.392 Test: blockdev write read size > 128k ...passed 00:06:50.392 Test: blockdev write read invalid size ...passed 00:06:50.392 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.392 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.392 Test: blockdev write read max offset ...passed 00:06:50.392 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.392 Test: blockdev writev readv 8 blocks ...passed 00:06:50.392 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.392 Test: blockdev writev readv block ...passed 00:06:50.392 Test: blockdev writev readv size > 128k ...passed 00:06:50.392 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.392 Test: blockdev comparev and writev ...[2024-11-25 12:04:51.462524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf002000 len:0x1000 00:06:50.392 [2024-11-25 12:04:51.462736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.392 passed 00:06:50.392 Test: blockdev nvme passthru rw ...passed 00:06:50.392 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:04:51.465324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.392 [2024-11-25 12:04:51.465385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.392 passed 00:06:50.655 Test: blockdev nvme admin passthru ...passed 00:06:50.655 Test: blockdev copy ...passed 00:06:50.655 Suite: bdevio tests on: Nvme2n2 00:06:50.655 Test: blockdev write read block ...passed 00:06:50.655 Test: blockdev write zeroes read block ...passed 00:06:50.655 Test: blockdev write zeroes read no split ...passed 00:06:50.655 Test: blockdev write zeroes read split ...passed 00:06:50.655 Test: blockdev write zeroes read split partial ...passed 00:06:50.655 Test: blockdev reset ...[2024-11-25 12:04:51.542695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:50.655 [2024-11-25 12:04:51.549198] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:50.655 passed 00:06:50.655 Test: blockdev write read 8 blocks ...passed 00:06:50.655 Test: blockdev write read size > 128k ...passed 00:06:50.655 Test: blockdev write read invalid size ...passed 00:06:50.655 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.655 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.655 Test: blockdev write read max offset ...passed 00:06:50.655 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.655 Test: blockdev writev readv 8 blocks ...passed 00:06:50.655 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.655 Test: blockdev writev readv block ...passed 00:06:50.655 Test: blockdev writev readv size > 128k ...passed 00:06:50.655 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.655 Test: blockdev comparev and writev ...[2024-11-25 12:04:51.572340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dee38000 len:0x1000 00:06:50.655 [2024-11-25 12:04:51.572447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.655 passed 00:06:50.655 Test: blockdev nvme passthru rw ...passed 00:06:50.655 Test: blockdev nvme passthru vendor specific ...passed 00:06:50.656 Test: blockdev nvme admin passthru ...[2024-11-25 12:04:51.575899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.656 [2024-11-25 12:04:51.575970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.656 passed 00:06:50.656 Test: blockdev copy ...passed 00:06:50.656 Suite: bdevio tests on: Nvme2n1 00:06:50.656 Test: blockdev write read block ...passed 00:06:50.656 Test: blockdev write zeroes read block ...passed 00:06:50.656 Test: blockdev write zeroes read no split ...passed 00:06:50.656 Test: blockdev write zeroes read split ...passed 00:06:50.656 Test: blockdev write zeroes read split partial ...passed 00:06:50.656 Test: blockdev reset ...[2024-11-25 12:04:51.659120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:50.656 [2024-11-25 12:04:51.664857] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:50.656 passed 00:06:50.656 Test: blockdev write read 8 blocks ...passed 00:06:50.656 Test: blockdev write read size > 128k ...passed 00:06:50.656 Test: blockdev write read invalid size ...passed 00:06:50.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.656 Test: blockdev write read max offset ...passed 00:06:50.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.656 Test: blockdev writev readv 8 blocks ...passed 00:06:50.656 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.656 Test: blockdev writev readv block ...passed 00:06:50.656 Test: blockdev writev readv size > 128k ...passed 00:06:50.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.656 Test: blockdev comparev and writev ...[2024-11-25 12:04:51.690931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dee34000 len:0x1000 00:06:50.656 [2024-11-25 12:04:51.691027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.656 passed 00:06:50.656 Test: blockdev nvme passthru rw ...passed 00:06:50.656 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:04:51.694509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.656 [2024-11-25 12:04:51.694673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.656 passed 00:06:50.656 Test: blockdev nvme admin passthru ...passed 00:06:50.656 Test: blockdev copy ...passed 00:06:50.656 Suite: bdevio tests on: Nvme1n1p2 00:06:50.656 Test: blockdev write read block ...passed 00:06:50.656 Test: blockdev write zeroes read block ...passed 00:06:50.656 Test: blockdev write zeroes read no split ...passed 00:06:50.930 Test: blockdev write zeroes read split ...passed 00:06:50.930 Test: blockdev write zeroes read split partial ...passed 00:06:50.930 Test: blockdev reset ...[2024-11-25 12:04:51.766084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:50.930 [2024-11-25 12:04:51.769831] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:50.930 passed 00:06:50.930 Test: blockdev write read 8 blocks ...passed 00:06:50.930 Test: blockdev write read size > 128k ...passed 00:06:50.930 Test: blockdev write read invalid size ...passed 00:06:50.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.930 Test: blockdev write read max offset ...passed 00:06:50.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.930 Test: blockdev writev readv 8 blocks ...passed 00:06:50.930 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.930 Test: blockdev writev readv block ...passed 00:06:50.930 Test: blockdev writev readv size > 128k ...passed 00:06:50.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.930 Test: blockdev comparev and writev ...[2024-11-25 12:04:51.792636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dee30000 len:0x1000 00:06:50.930 [2024-11-25 12:04:51.792722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.930 passed 00:06:50.930 Test: blockdev nvme passthru rw ...passed 00:06:50.930 Test: blockdev nvme passthru vendor specific ...passed 00:06:50.930 Test: blockdev nvme admin passthru ...passed 00:06:50.930 Test: blockdev copy ...passed 00:06:50.930 Suite: bdevio tests on: Nvme1n1p1 00:06:50.930 Test: blockdev write read block ...passed 00:06:50.930 Test: blockdev write zeroes read block ...passed 00:06:50.930 Test: blockdev write zeroes read no split ...passed 00:06:50.930 Test: blockdev write zeroes read split ...passed 00:06:50.930 Test: blockdev write zeroes read split partial ...passed 00:06:50.930 Test: blockdev reset ...[2024-11-25 12:04:51.859400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:50.930 [2024-11-25 12:04:51.864983] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
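# Aside (annotation, not captured output): the COMPARE above for Nvme1n1p2 targets lba:655360, which checks out
# against the GPT layout dumped earlier -- Nvme1n1p1 starts at offset_blocks 256 and spans 655104 blocks, and
# 256 + 655104 = 655360 is exactly the first block of Nvme1n1p2 (likewise, the lba:256 COMPARE below is
# partition 1's first block). bdevio addresses each partition bdev from offset 0; the notices show the raw
# namespace LBA after the GPT offset is applied.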
00:06:50.930 passed 00:06:50.930 Test: blockdev write read 8 blocks ...passed 00:06:50.930 Test: blockdev write read size > 128k ...passed 00:06:50.930 Test: blockdev write read invalid size ...passed 00:06:50.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.930 Test: blockdev write read max offset ...passed 00:06:50.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.930 Test: blockdev writev readv 8 blocks ...passed 00:06:50.931 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.931 Test: blockdev writev readv block ...passed 00:06:50.931 Test: blockdev writev readv size > 128k ...passed 00:06:50.931 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.931 Test: blockdev comparev and writev ...[2024-11-25 12:04:51.886045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bfa0e000 len:0x1000 00:06:50.931 [2024-11-25 12:04:51.886123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.931 passed 00:06:50.931 Test: blockdev nvme passthru rw ...passed 00:06:50.931 Test: blockdev nvme passthru vendor specific ...passed 00:06:50.931 Test: blockdev nvme admin passthru ...passed 00:06:50.931 Test: blockdev copy ...passed 00:06:50.931 Suite: bdevio tests on: Nvme0n1 00:06:50.931 Test: blockdev write read block ...passed 00:06:50.931 Test: blockdev write zeroes read block ...passed 00:06:50.931 Test: blockdev write zeroes read no split ...passed 00:06:50.931 Test: blockdev write zeroes read split ...passed 00:06:50.931 Test: blockdev write zeroes read split partial ...passed 00:06:50.931 Test: blockdev reset ...[2024-11-25 12:04:51.944294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:50.931 [2024-11-25 12:04:51.947253] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:50.931 passed 00:06:50.931 Test: blockdev write read 8 blocks ...passed 00:06:50.931 Test: blockdev write read size > 128k ...passed 00:06:50.931 Test: blockdev write read invalid size ...passed 00:06:50.931 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.931 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.931 Test: blockdev write read max offset ...passed 00:06:50.931 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.931 Test: blockdev writev readv 8 blocks ...passed 00:06:50.931 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.931 Test: blockdev writev readv block ...passed 00:06:50.931 Test: blockdev writev readv size > 128k ...passed 00:06:50.931 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.931 Test: blockdev comparev and writev ...passed 00:06:50.931 Test: blockdev nvme passthru rw ...[2024-11-25 12:04:51.958528] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:50.931 separate metadata which is not supported yet.
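The *ERROR* line above is advisory rather than a failure: bdevio skips comparev_and_writev on Nvme0n1 because that namespace is formatted with separate per-block metadata, which the test does not support yet, and the test is still recorded as passed. Whether a namespace carries such metadata can be checked from the shell; a sketch assuming nvme-cli and an illustrative device name:

    # the in-use LBA format reports the metadata size; ms > 0 means per-block metadata is present
    sudo nvme id-ns /dev/nvme0n1 -H | grep -i 'in use'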
00:06:50.931 passed 00:06:50.931 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:04:51.960274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:50.931 [2024-11-25 12:04:51.960399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:50.931 passed 00:06:50.931 Test: blockdev nvme admin passthru ...passed 00:06:50.931 Test: blockdev copy ...passed 00:06:50.931 00:06:50.931 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.931 suites 7 7 n/a 0 0 00:06:50.931 tests 161 161 161 0 0 00:06:50.931 asserts 1025 1025 1025 0 n/a 00:06:50.931 00:06:50.931 Elapsed time = 1.767 seconds 00:06:50.931 0 00:06:50.931 12:04:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61599 00:06:50.931 12:04:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61599 ']' 00:06:50.931 12:04:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61599 00:06:50.931 12:04:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:50.931 12:04:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.931 12:04:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61599 00:06:51.191 12:04:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.191 12:04:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.191 12:04:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61599' 00:06:51.191 killing process with pid 61599 00:06:51.191 12:04:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61599 00:06:51.191 12:04:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61599 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:51.761 00:06:51.761 real 0m2.579s 00:06:51.761 user 0m6.399s 00:06:51.761 sys 0m0.410s 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.761 ************************************ 00:06:51.761 END TEST bdev_bounds 00:06:51.761 ************************************ 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:51.761 12:04:52 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:51.761 12:04:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:51.761 12:04:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.761 12:04:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.761 ************************************ 00:06:51.761 START TEST bdev_nbd 00:06:51.761 ************************************ 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61653 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61653 /var/tmp/spdk-nbd.sock 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61653 ']' 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.761 12:04:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:52.024 [2024-11-25 12:04:52.856122] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
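bdev_svc is launched here with a private RPC socket (-r /var/tmp/spdk-nbd.sock) and the shared bdev.json config, and waitforlisten blocks until the app answers RPCs on that socket. The same bring-up can be scripted on its own; a minimal sketch assuming an SPDK checkout at $SPDK (path illustrative):

    sudo $SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json $SPDK/test/bdev/bdev.json &
    # poll the RPC socket with a bounded retry loop until the target responds
    for _ in $(seq 1 100); do
        sudo $SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done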
00:06:52.024 [2024-11-25 12:04:52.856435] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.024 [2024-11-25 12:04:53.027672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.284 [2024-11-25 12:04:53.133745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:52.856 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.116 1+0 records in 00:06:53.116 1+0 records out 00:06:53.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823617 s, 5.0 MB/s 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:53.116 12:04:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.376 1+0 records in 00:06:53.376 1+0 records out 00:06:53.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324815 s, 12.6 MB/s 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.376 1+0 records in 00:06:53.376 1+0 records out 00:06:53.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544836 s, 7.5 MB/s 00:06:53.376 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.635 1+0 records in 00:06:53.635 1+0 records out 00:06:53.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046174 s, 8.9 MB/s 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:53.635 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:53.893 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.894 1+0 records in 00:06:53.894 1+0 records out 00:06:53.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841967 s, 4.9 MB/s 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:53.894 12:04:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
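Each nbd_start_disk RPC above exports one bdev as a kernel /dev/nbdX node; waitfornbd then polls /proc/partitions until the device registers and performs a single 4 KiB O_DIRECT read to prove it is usable. Condensed into a standalone sketch (the Nvme2n3 / /dev/nbd5 pairing mirrors the step that follows):

    sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd5
    # wait (bounded) for the kernel to register the device, then read one block through it
    for _ in $(seq 1 20); do
        grep -qw nbd5 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd5 of=/dev/null bs=4096 count=1 iflag=direct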
00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.233 1+0 records in 00:06:54.233 1+0 records out 00:06:54.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805265 s, 5.1 MB/s 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.233 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.496 1+0 records in 00:06:54.496 1+0 records out 00:06:54.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965339 s, 4.2 MB/s 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.496 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.758 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:54.758 { 00:06:54.758 "nbd_device": "/dev/nbd0", 00:06:54.758 "bdev_name": "Nvme0n1" 00:06:54.758 }, 00:06:54.758 { 00:06:54.758 "nbd_device": "/dev/nbd1", 00:06:54.758 "bdev_name": "Nvme1n1p1" 00:06:54.758 }, 00:06:54.758 { 00:06:54.758 "nbd_device": "/dev/nbd2", 00:06:54.758 "bdev_name": "Nvme1n1p2" 00:06:54.758 }, 00:06:54.758 { 00:06:54.758 "nbd_device": "/dev/nbd3", 00:06:54.758 "bdev_name": "Nvme2n1" 00:06:54.758 }, 00:06:54.758 { 00:06:54.758 "nbd_device": "/dev/nbd4", 00:06:54.758 "bdev_name": "Nvme2n2" 00:06:54.758 }, 00:06:54.758 { 00:06:54.759 "nbd_device": "/dev/nbd5", 00:06:54.759 "bdev_name": "Nvme2n3" 00:06:54.759 }, 00:06:54.759 { 00:06:54.759 "nbd_device": "/dev/nbd6", 00:06:54.759 "bdev_name": "Nvme3n1" 00:06:54.759 } 00:06:54.759 ]' 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:54.759 { 00:06:54.759 "nbd_device": "/dev/nbd0", 00:06:54.759 "bdev_name": "Nvme0n1" 00:06:54.759 }, 00:06:54.759 { 00:06:54.759 "nbd_device": "/dev/nbd1", 00:06:54.759 "bdev_name": "Nvme1n1p1" 00:06:54.759 }, 00:06:54.759 { 00:06:54.759 "nbd_device": "/dev/nbd2", 00:06:54.759 "bdev_name": "Nvme1n1p2" 00:06:54.759 }, 00:06:54.759 { 00:06:54.759 "nbd_device": "/dev/nbd3", 00:06:54.759 "bdev_name": "Nvme2n1" 00:06:54.759 }, 00:06:54.759 { 00:06:54.759 "nbd_device": "/dev/nbd4", 00:06:54.759 "bdev_name": "Nvme2n2" 00:06:54.759 }, 00:06:54.759 { 00:06:54.759 "nbd_device": "/dev/nbd5", 00:06:54.759 "bdev_name": "Nvme2n3" 00:06:54.759 }, 00:06:54.759 { 00:06:54.759 "nbd_device": "/dev/nbd6", 00:06:54.759 "bdev_name": "Nvme3n1" 00:06:54.759 } 00:06:54.759 ]' 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.759 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.021 12:04:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.282 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:55.544 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.804 12:04:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.065 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
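Tear-down mirrors bring-up: the script walks the NBD list, calls nbd_stop_disk for each device, and waitfornbd_exit polls /proc/partitions until the nbdX entry disappears. The per-device round trip, sketched (socket path as in this run, device illustrative):

    # list what is currently exported, then stop one device and wait for it to vanish
    sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
    sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
    for _ in $(seq 1 20); do
        grep -qw nbd6 /proc/partitions || break
        sleep 0.1
    done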
00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.324 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.582 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:56.582 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.583 12:04:57 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:56.583 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:56.840 /dev/nbd0 00:06:56.840 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.841 1+0 records in 00:06:56.841 1+0 records out 00:06:56.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638134 s, 6.4 MB/s 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:56.841 12:04:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:57.100 /dev/nbd1 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.100 12:04:58 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.100 1+0 records in 00:06:57.100 1+0 records out 00:06:57.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820785 s, 5.0 MB/s 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.100 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:57.414 /dev/nbd10 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.414 1+0 records in 00:06:57.414 1+0 records out 00:06:57.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118425 s, 3.5 MB/s 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.414 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:57.745 /dev/nbd11 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.745 1+0 records in 00:06:57.745 1+0 records out 00:06:57.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557114 s, 7.4 MB/s 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:57.745 /dev/nbd12 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
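The single-block waitfornbd reads above only prove each device answers I/O; the nbd_dd_data_verify phase further below pushes a 1 MiB random file through every /dev/nbdX with O_DIRECT and reads it back for comparison. The round trip for one device, sketched (using cmp as the comparison step is an assumption here; paths and device illustrative):

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
    dd if=/dev/nbd12 of=/tmp/readback bs=4096 count=256 iflag=direct
    cmp /tmp/nbdrandtest /tmp/readback && echo 'data intact'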
00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.745 1+0 records in 00:06:57.745 1+0 records out 00:06:57.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927313 s, 4.4 MB/s 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.745 12:04:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:58.004 /dev/nbd13 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.004 1+0 records in 00:06:58.004 1+0 records out 00:06:58.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772248 s, 5.3 MB/s 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.004 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:58.262 /dev/nbd14 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.262 1+0 records in 00:06:58.262 1+0 records out 00:06:58.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690882 s, 5.9 MB/s 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.262 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.263 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.263 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd0", 00:06:58.521 "bdev_name": "Nvme0n1" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd1", 00:06:58.521 "bdev_name": "Nvme1n1p1" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd10", 00:06:58.521 "bdev_name": "Nvme1n1p2" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd11", 00:06:58.521 "bdev_name": "Nvme2n1" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd12", 00:06:58.521 "bdev_name": "Nvme2n2" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd13", 00:06:58.521 "bdev_name": "Nvme2n3" 
00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd14", 00:06:58.521 "bdev_name": "Nvme3n1" 00:06:58.521 } 00:06:58.521 ]' 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd0", 00:06:58.521 "bdev_name": "Nvme0n1" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd1", 00:06:58.521 "bdev_name": "Nvme1n1p1" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd10", 00:06:58.521 "bdev_name": "Nvme1n1p2" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd11", 00:06:58.521 "bdev_name": "Nvme2n1" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd12", 00:06:58.521 "bdev_name": "Nvme2n2" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd13", 00:06:58.521 "bdev_name": "Nvme2n3" 00:06:58.521 }, 00:06:58.521 { 00:06:58.521 "nbd_device": "/dev/nbd14", 00:06:58.521 "bdev_name": "Nvme3n1" 00:06:58.521 } 00:06:58.521 ]' 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.521 /dev/nbd1 00:06:58.521 /dev/nbd10 00:06:58.521 /dev/nbd11 00:06:58.521 /dev/nbd12 00:06:58.521 /dev/nbd13 00:06:58.521 /dev/nbd14' 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.521 /dev/nbd1 00:06:58.521 /dev/nbd10 00:06:58.521 /dev/nbd11 00:06:58.521 /dev/nbd12 00:06:58.521 /dev/nbd13 00:06:58.521 /dev/nbd14' 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:58.521 256+0 records in 00:06:58.521 256+0 records out 00:06:58.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00828401 s, 127 MB/s 00:06:58.521 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.522 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.784 256+0 records in 00:06:58.784 256+0 records out 00:06:58.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.231947 s, 4.5 MB/s 00:06:58.784 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.784 12:04:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.044 256+0 records in 00:06:59.044 256+0 records out 00:06:59.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.254093 s, 4.1 MB/s 00:06:59.044 12:05:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.044 12:05:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:59.303 256+0 records in 00:06:59.303 256+0 records out 00:06:59.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238291 s, 4.4 MB/s 00:06:59.303 12:05:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.303 12:05:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:59.564 256+0 records in 00:06:59.564 256+0 records out 00:06:59.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242868 s, 4.3 MB/s 00:06:59.564 12:05:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.564 12:05:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:59.826 256+0 records in 00:06:59.826 256+0 records out 00:06:59.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.25301 s, 4.1 MB/s 00:06:59.826 12:05:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.826 12:05:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:00.087 256+0 records in 00:07:00.087 256+0 records out 00:07:00.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257039 s, 4.1 MB/s 00:07:00.087 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.087 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:00.350 256+0 records in 00:07:00.350 256+0 records out 00:07:00.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.261259 s, 4.0 MB/s 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.350 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.611 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.873 12:05:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:01.132 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:01.132 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:01.133 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:01.133 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.133 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.133 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:01.133 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.133 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.133 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.133 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.392 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.652 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.912 12:05:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.171 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.171 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.171 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:02.172 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:02.431 malloc_lvol_verify 00:07:02.431 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:02.691 46553a55-43ea-45b4-85f9-116fd190dc24 00:07:02.691 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:02.951 e67c6138-313e-4ee2-8b8b-20e47c2ae80e 00:07:02.951 12:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:03.210 /dev/nbd0 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:03.210 mke2fs 1.47.0 (5-Feb-2023) 00:07:03.210 Discarding device blocks: 0/4096 done 00:07:03.210 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:03.210 00:07:03.210 Allocating group tables: 0/1 done 00:07:03.210 Writing inode tables: 0/1 done 00:07:03.210 Creating journal (1024 blocks): done 00:07:03.210 Writing superblocks and filesystem accounting information: 0/1 done 00:07:03.210 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:03.210 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61653 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61653 ']' 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61653 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61653 00:07:03.470 killing process with pid 61653 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61653' 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61653 00:07:03.470 12:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61653 00:07:04.121 ************************************ 00:07:04.121 END TEST bdev_nbd 00:07:04.121 ************************************ 00:07:04.121 12:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:04.121 00:07:04.121 real 0m12.410s 00:07:04.121 user 0m16.883s 00:07:04.121 sys 0m4.053s 00:07:04.121 12:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.121 12:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:04.383 12:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:04.383 12:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:04.383 12:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:04.383 skipping fio tests on NVMe due to multi-ns failures. 00:07:04.383 12:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:04.383 12:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:04.383 12:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:04.383 12:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:04.383 12:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.383 12:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:04.383 ************************************ 00:07:04.383 START TEST bdev_verify 00:07:04.383 ************************************ 00:07:04.383 12:05:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:04.383 [2024-11-25 12:05:05.319758] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:07:04.383 [2024-11-25 12:05:05.319891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62089 ] 00:07:04.645 [2024-11-25 12:05:05.480579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.645 [2024-11-25 12:05:05.584961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.645 [2024-11-25 12:05:05.584991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.217 Running I/O for 5 seconds... 
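The verify stage launched here is a single bdevperf invocation; the full command is in the run_test line above. Stripped to essentials (run from the SPDK repo root; bdev.json is the config the earlier stages generated):

build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# -q 128  queue depth per job         -o 4096  4 KiB I/Os
# -w verify  write-then-read-back     -t 5     run for 5 seconds
# -m 0x3  two reactor cores; with -C every core drives every bdev,
#         which is why the table below lists each bdev twice
#         (once per Core Mask, 0x1 and 0x2)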
00:07:07.545 18176.00 IOPS, 71.00 MiB/s [2024-11-25T12:05:09.564Z] 17536.00 IOPS, 68.50 MiB/s [2024-11-25T12:05:10.944Z] 17834.67 IOPS, 69.67 MiB/s [2024-11-25T12:05:11.513Z] 17632.00 IOPS, 68.88 MiB/s [2024-11-25T12:05:11.513Z] 17625.60 IOPS, 68.85 MiB/s 00:07:10.433 Latency(us) 00:07:10.433 [2024-11-25T12:05:11.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.433 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x0 length 0xbd0bd 00:07:10.433 Nvme0n1 : 5.06 1240.74 4.85 0.00 0.00 102754.49 21072.34 105664.20 00:07:10.433 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:10.433 Nvme0n1 : 5.08 1235.01 4.82 0.00 0.00 103390.52 21979.77 93968.54 00:07:10.433 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x0 length 0x4ff80 00:07:10.433 Nvme1n1p1 : 5.06 1240.34 4.85 0.00 0.00 102579.59 22282.24 104051.00 00:07:10.433 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:10.433 Nvme1n1p1 : 5.08 1234.12 4.82 0.00 0.00 103241.69 22786.36 88322.36 00:07:10.433 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x0 length 0x4ff7f 00:07:10.433 Nvme1n1p2 : 5.08 1248.14 4.88 0.00 0.00 101694.22 6906.49 100018.02 00:07:10.433 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:10.433 Nvme1n1p2 : 5.08 1233.75 4.82 0.00 0.00 102958.50 23895.43 85902.57 00:07:10.433 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x0 length 0x80000 00:07:10.433 Nvme2n1 : 5.08 1247.79 4.87 0.00 0.00 101486.22 7057.72 98001.53 00:07:10.433 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x80000 length 0x80000 00:07:10.433 Nvme2n1 : 5.09 1233.43 4.82 0.00 0.00 102752.05 22887.19 85902.57 00:07:10.433 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x0 length 0x80000 00:07:10.433 Nvme2n2 : 5.09 1257.41 4.91 0.00 0.00 100647.14 9275.86 101227.91 00:07:10.433 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x80000 length 0x80000 00:07:10.433 Nvme2n2 : 5.09 1233.08 4.82 0.00 0.00 102542.74 22080.59 88725.66 00:07:10.433 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x0 length 0x80000 00:07:10.433 Nvme2n3 : 5.09 1257.07 4.91 0.00 0.00 100508.21 9628.75 102034.51 00:07:10.433 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x80000 length 0x80000 00:07:10.433 Nvme2n3 : 5.09 1232.72 4.82 0.00 0.00 102295.83 17241.01 90742.15 00:07:10.433 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x0 length 0x20000 00:07:10.433 Nvme3n1 : 5.09 1256.73 4.91 0.00 0.00 100341.22 9427.10 106470.79 00:07:10.433 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.433 Verification LBA range: start 0x20000 length 0x20000 
00:07:10.433 Nvme3n1 : 5.09 1243.66 4.86 0.00 0.00 101278.54 2079.51 92355.35 00:07:10.433 [2024-11-25T12:05:11.513Z] =================================================================================================================== 00:07:10.433 [2024-11-25T12:05:11.513Z] Total : 17393.98 67.95 0.00 0.00 102025.78 2079.51 106470.79 00:07:11.810 00:07:11.810 real 0m7.229s 00:07:11.810 user 0m13.474s 00:07:11.810 sys 0m0.245s 00:07:11.810 12:05:12 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.810 ************************************ 00:07:11.810 END TEST bdev_verify 00:07:11.810 ************************************ 00:07:11.810 12:05:12 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:11.810 12:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:11.810 12:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:11.810 12:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.810 12:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.810 ************************************ 00:07:11.810 START TEST bdev_verify_big_io 00:07:11.810 ************************************ 00:07:11.810 12:05:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:11.810 [2024-11-25 12:05:12.600103] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:07:11.810 [2024-11-25 12:05:12.600226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62188 ] 00:07:11.810 [2024-11-25 12:05:12.763360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.810 [2024-11-25 12:05:12.868143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.810 [2024-11-25 12:05:12.868335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.752 Running I/O for 5 seconds... 
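This pass repeats the verify workload with 64 KiB I/Os (-o 65536 in the run_test line above) in place of 4 KiB, so IOPS drop while bytes per I/O grow. The MiB/s column in these tables is simply IOPS scaled by the I/O size; a quick check against the aggregates (the awk lines are plain arithmetic, not part of the test):

awk 'BEGIN { printf "%.2f\n", 1403.24 * 65536 / 1048576 }'    # 87.70 -> Total row below
awk 'BEGIN { printf "%.2f\n", 17625.60 * 4096 / 1048576 }'    # 68.85 -> 4 KiB run above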
00:07:17.900 1020.00 IOPS, 63.75 MiB/s [2024-11-25T12:05:19.914Z] 1830.50 IOPS, 114.41 MiB/s [2024-11-25T12:05:19.914Z] 2791.00 IOPS, 174.44 MiB/s 00:07:18.834 Latency(us) 00:07:18.834 [2024-11-25T12:05:19.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.834 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x0 length 0xbd0b 00:07:18.834 Nvme0n1 : 5.75 100.24 6.26 0.00 0.00 1224066.23 19358.33 1464780.01 00:07:18.834 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:18.834 Nvme0n1 : 5.93 86.68 5.42 0.00 0.00 1366717.87 26416.05 1464780.01 00:07:18.834 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x0 length 0x4ff8 00:07:18.834 Nvme1n1p1 : 5.93 103.66 6.48 0.00 0.00 1137629.55 80659.69 1264743.98 00:07:18.834 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:18.834 Nvme1n1p1 : 5.93 90.86 5.68 0.00 0.00 1293019.76 118569.75 1251838.42 00:07:18.834 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x0 length 0x4ff7 00:07:18.834 Nvme1n1p2 : 5.93 103.63 6.48 0.00 0.00 1095523.55 80256.39 1058255.16 00:07:18.834 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:18.834 Nvme1n1p2 : 6.03 95.46 5.97 0.00 0.00 1204209.30 98001.53 1161499.57 00:07:18.834 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x0 length 0x8000 00:07:18.834 Nvme2n1 : 5.94 107.80 6.74 0.00 0.00 1027119.89 100018.02 1045349.61 00:07:18.834 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x8000 length 0x8000 00:07:18.834 Nvme2n1 : 6.18 99.69 6.23 0.00 0.00 1112684.09 72190.42 1193763.45 00:07:18.834 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x0 length 0x8000 00:07:18.834 Nvme2n2 : 6.18 113.95 7.12 0.00 0.00 932561.24 46984.27 1077613.49 00:07:18.834 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x8000 length 0x8000 00:07:18.834 Nvme2n2 : 6.18 103.53 6.47 0.00 0.00 1041694.09 70980.53 1232480.10 00:07:18.834 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x0 length 0x8000 00:07:18.834 Nvme2n3 : 6.31 82.43 5.15 0.00 0.00 1267465.37 8922.98 2439149.10 00:07:18.834 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x8000 length 0x8000 00:07:18.834 Nvme2n3 : 6.24 107.03 6.69 0.00 0.00 968802.90 56865.08 1264743.98 00:07:18.834 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x0 length 0x2000 00:07:18.834 Nvme3n1 : 6.31 86.18 5.39 0.00 0.00 1163852.80 3906.95 2452054.65 00:07:18.834 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:18.834 Verification LBA range: start 0x2000 length 0x2000 00:07:18.834 Nvme3n1 : 6.29 122.10 7.63 0.00 0.00 823109.69 3554.07 1284102.30 00:07:18.834 
[2024-11-25T12:05:19.914Z] =================================================================================================================== 00:07:18.834 [2024-11-25T12:05:19.914Z] Total : 1403.24 87.70 0.00 0.00 1102403.10 3554.07 2452054.65 00:07:20.735 00:07:20.735 real 0m8.776s 00:07:20.735 user 0m16.573s 00:07:20.735 sys 0m0.250s 00:07:20.735 12:05:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.735 12:05:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:20.735 ************************************ 00:07:20.735 END TEST bdev_verify_big_io 00:07:20.735 ************************************ 00:07:20.735 12:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.735 12:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:20.735 12:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.735 12:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.735 ************************************ 00:07:20.735 START TEST bdev_write_zeroes 00:07:20.735 ************************************ 00:07:20.735 12:05:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.735 [2024-11-25 12:05:21.411923] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:07:20.735 [2024-11-25 12:05:21.412071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62297 ] 00:07:20.735 [2024-11-25 12:05:21.571898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.735 [2024-11-25 12:05:21.673507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.301 Running I/O for 1 seconds... 
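Every stage in this log is launched through the run_test wrapper, which prints the asterisk banners and the real/user/sys timings seen throughout. A hypothetical reconstruction of its shape, inferred only from this output (the actual helper in test/common/autotest_common.sh also manages xtrace and failure accounting):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # produces the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}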
00:07:22.234 66755.00 IOPS, 260.76 MiB/s 00:07:22.234 Latency(us) 00:07:22.234 [2024-11-25T12:05:23.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.234 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.234 Nvme0n1 : 1.02 9432.30 36.84 0.00 0.00 13538.90 10233.70 39321.60 00:07:22.234 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.234 Nvme1n1p1 : 1.03 9479.84 37.03 0.00 0.00 13452.31 9981.64 33675.42 00:07:22.234 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.234 Nvme1n1p2 : 1.03 9467.89 36.98 0.00 0.00 13420.26 10183.29 34280.37 00:07:22.234 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.234 Nvme2n1 : 1.03 9456.89 36.94 0.00 0.00 13360.51 10183.29 28230.89 00:07:22.234 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.234 Nvme2n2 : 1.03 9446.03 36.90 0.00 0.00 13344.05 9023.80 28029.24 00:07:22.234 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.234 Nvme2n3 : 1.03 9435.28 36.86 0.00 0.00 13330.74 7965.14 28634.19 00:07:22.234 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.234 Nvme3n1 : 1.03 9486.41 37.06 0.00 0.00 13236.23 7965.14 25206.15 00:07:22.234 [2024-11-25T12:05:23.314Z] =================================================================================================================== 00:07:22.234 [2024-11-25T12:05:23.314Z] Total : 66204.63 258.61 0.00 0.00 13383.01 7965.14 39321.60 00:07:23.168 00:07:23.168 real 0m2.696s 00:07:23.168 user 0m2.392s 00:07:23.168 sys 0m0.185s 00:07:23.168 12:05:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.168 12:05:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:23.168 ************************************ 00:07:23.168 END TEST bdev_write_zeroes 00:07:23.168 ************************************ 00:07:23.168 12:05:24 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.168 12:05:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:23.168 12:05:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.168 12:05:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.168 ************************************ 00:07:23.168 START TEST bdev_json_nonenclosed 00:07:23.168 ************************************ 00:07:23.168 12:05:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.168 [2024-11-25 12:05:24.151978] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
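bdev_json_nonenclosed, starting here, is a negative test: bdevperf is handed a config whose top level is not wrapped in an object and must refuse to start. The fixture file is not printed in this log; a hypothetical reconstruction of its shape, enough to trip the error seen below:

cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
build/examples/bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
# expect: json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.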
00:07:23.168 [2024-11-25 12:05:24.152106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62350 ] 00:07:23.426 [2024-11-25 12:05:24.314445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.426 [2024-11-25 12:05:24.415417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.426 [2024-11-25 12:05:24.415501] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:23.426 [2024-11-25 12:05:24.415517] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:23.426 [2024-11-25 12:05:24.415526] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.685 00:07:23.685 real 0m0.513s 00:07:23.685 user 0m0.298s 00:07:23.685 sys 0m0.109s 00:07:23.685 12:05:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.685 12:05:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:23.685 ************************************ 00:07:23.685 END TEST bdev_json_nonenclosed 00:07:23.685 ************************************ 00:07:23.685 12:05:24 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.685 12:05:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:23.685 12:05:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.685 12:05:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.685 ************************************ 00:07:23.685 START TEST bdev_json_nonarray 00:07:23.685 ************************************ 00:07:23.685 12:05:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.685 [2024-11-25 12:05:24.704289] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:07:23.685 [2024-11-25 12:05:24.704415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62370 ] 00:07:23.943 [2024-11-25 12:05:24.866121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.943 [2024-11-25 12:05:24.966547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.943 [2024-11-25 12:05:24.966638] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
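The companion bdev_json_nonarray failure just logged comes from the next check in json_config_prepare_ctx: the top level is an object, but "subsystems" is not an array. Again the fixture itself is not shown in the log; hypothetically:

cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# expect: json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.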
00:07:23.943 [2024-11-25 12:05:24.966654] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:23.943 [2024-11-25 12:05:24.966663] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.201 00:07:24.201 real 0m0.508s 00:07:24.201 user 0m0.312s 00:07:24.201 sys 0m0.091s 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:24.201 ************************************ 00:07:24.201 END TEST bdev_json_nonarray 00:07:24.201 ************************************ 00:07:24.201 12:05:25 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:24.201 12:05:25 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:24.201 12:05:25 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:24.201 12:05:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.201 12:05:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.201 12:05:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.201 ************************************ 00:07:24.201 START TEST bdev_gpt_uuid 00:07:24.201 ************************************ 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62401 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62401 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62401 ']' 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.201 12:05:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:24.201 [2024-11-25 12:05:25.255995] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
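bdev_gpt_uuid, starting up here, drives a standalone spdk_tgt (pid 62401) over /var/tmp/spdk.sock instead of bdevperf. Its core assertions, condensed from the rpc_cmd/jq traces below (rpc_cmd is the suite's wrapper around scripts/rpc.py; the UUID is the one from this run):

# load the generated bdev config, wait for GPT examine, then look up a
# partition bdev by UUID and check what comes back
rpc.py load_config -j test/bdev/bdev.json
rpc.py bdev_wait_for_examine
bdev=$(rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
jq -r 'length' <<< "$bdev"                                          # expect 1
jq -r '.[0].aliases[0]' <<< "$bdev"                                 # expect the UUID
jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev"  # expect the UUID
# the same checks repeat for the second partition, then killprocess stops the target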
00:07:24.201 [2024-11-25 12:05:25.256094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62401 ] 00:07:24.460 [2024-11-25 12:05:25.403251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.460 [2024-11-25 12:05:25.503180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.394 Some configs were skipped because the RPC state that can call them passed over. 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.394 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.652 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.652 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:25.652 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.652 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.652 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.652 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:25.652 { 00:07:25.652 "name": "Nvme1n1p1", 00:07:25.652 "aliases": [ 00:07:25.652 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:25.652 ], 00:07:25.652 "product_name": "GPT Disk", 00:07:25.652 "block_size": 4096, 00:07:25.652 "num_blocks": 655104, 00:07:25.652 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:25.652 "assigned_rate_limits": { 00:07:25.652 "rw_ios_per_sec": 0, 00:07:25.652 "rw_mbytes_per_sec": 0, 00:07:25.652 "r_mbytes_per_sec": 0, 00:07:25.652 "w_mbytes_per_sec": 0 00:07:25.652 }, 00:07:25.652 "claimed": false, 00:07:25.652 "zoned": false, 00:07:25.652 "supported_io_types": { 00:07:25.652 "read": true, 00:07:25.652 "write": true, 00:07:25.652 "unmap": true, 00:07:25.652 "flush": true, 00:07:25.652 "reset": true, 00:07:25.652 "nvme_admin": false, 00:07:25.652 "nvme_io": false, 00:07:25.652 "nvme_io_md": false, 00:07:25.652 "write_zeroes": true, 00:07:25.652 "zcopy": false, 00:07:25.652 "get_zone_info": false, 00:07:25.652 "zone_management": false, 00:07:25.652 "zone_append": false, 00:07:25.652 "compare": true, 00:07:25.652 "compare_and_write": false, 00:07:25.652 "abort": true, 00:07:25.652 "seek_hole": false, 00:07:25.652 "seek_data": false, 00:07:25.652 "copy": true, 00:07:25.652 "nvme_iov_md": false 00:07:25.652 }, 00:07:25.652 "driver_specific": { 
00:07:25.652 "gpt": { 00:07:25.652 "base_bdev": "Nvme1n1", 00:07:25.652 "offset_blocks": 256, 00:07:25.652 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:25.652 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:25.652 "partition_name": "SPDK_TEST_first" 00:07:25.652 } 00:07:25.652 } 00:07:25.652 } 00:07:25.652 ]' 00:07:25.652 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:25.653 { 00:07:25.653 "name": "Nvme1n1p2", 00:07:25.653 "aliases": [ 00:07:25.653 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:25.653 ], 00:07:25.653 "product_name": "GPT Disk", 00:07:25.653 "block_size": 4096, 00:07:25.653 "num_blocks": 655103, 00:07:25.653 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:25.653 "assigned_rate_limits": { 00:07:25.653 "rw_ios_per_sec": 0, 00:07:25.653 "rw_mbytes_per_sec": 0, 00:07:25.653 "r_mbytes_per_sec": 0, 00:07:25.653 "w_mbytes_per_sec": 0 00:07:25.653 }, 00:07:25.653 "claimed": false, 00:07:25.653 "zoned": false, 00:07:25.653 "supported_io_types": { 00:07:25.653 "read": true, 00:07:25.653 "write": true, 00:07:25.653 "unmap": true, 00:07:25.653 "flush": true, 00:07:25.653 "reset": true, 00:07:25.653 "nvme_admin": false, 00:07:25.653 "nvme_io": false, 00:07:25.653 "nvme_io_md": false, 00:07:25.653 "write_zeroes": true, 00:07:25.653 "zcopy": false, 00:07:25.653 "get_zone_info": false, 00:07:25.653 "zone_management": false, 00:07:25.653 "zone_append": false, 00:07:25.653 "compare": true, 00:07:25.653 "compare_and_write": false, 00:07:25.653 "abort": true, 00:07:25.653 "seek_hole": false, 00:07:25.653 "seek_data": false, 00:07:25.653 "copy": true, 00:07:25.653 "nvme_iov_md": false 00:07:25.653 }, 00:07:25.653 "driver_specific": { 00:07:25.653 "gpt": { 00:07:25.653 "base_bdev": "Nvme1n1", 00:07:25.653 "offset_blocks": 655360, 00:07:25.653 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:25.653 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:25.653 "partition_name": "SPDK_TEST_second" 00:07:25.653 } 00:07:25.653 } 00:07:25.653 } 00:07:25.653 ]' 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62401 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62401 ']' 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62401 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62401 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.653 killing process with pid 62401 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62401' 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62401 00:07:25.653 12:05:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62401 00:07:27.552 00:07:27.552 real 0m2.962s 00:07:27.552 user 0m3.126s 00:07:27.552 sys 0m0.381s 00:07:27.552 12:05:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.552 12:05:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.552 ************************************ 00:07:27.552 END TEST bdev_gpt_uuid 00:07:27.552 ************************************ 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:27.552 12:05:28 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:27.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.552 Waiting for block devices as requested 00:07:27.810 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.810 0000:00:10.0 (1b36 0010): 
00:07:27.810 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.810 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.076 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:33.076 12:05:33 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:33.076 12:05:33 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:33.335 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:33.335 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:33.335 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:33.335 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:33.335 12:05:34 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:33.335 00:07:33.335 real 0m59.002s 00:07:33.335 user 1m14.807s 00:07:33.335 sys 0m8.826s 00:07:33.335 12:05:34 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.335 12:05:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.335 ************************************ 00:07:33.335 END TEST blockdev_nvme_gpt 00:07:33.335 ************************************ 00:07:33.335 12:05:34 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:33.335 12:05:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.335 12:05:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.335 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:33.335 ************************************ 00:07:33.335 START TEST nvme 00:07:33.335 ************************************ 00:07:33.335 12:05:34 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:33.335 * Looking for test storage... 00:07:33.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:33.335 12:05:34 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.335 12:05:34 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.335 12:05:34 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.335 12:05:34 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.335 12:05:34 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.335 12:05:34 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.335 12:05:34 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.335 12:05:34 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.335 12:05:34 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.335 12:05:34 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.335 12:05:34 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.335 12:05:34 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.335 12:05:34 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.335 12:05:34 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.335 12:05:34 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.335 12:05:34 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:33.335 12:05:34 nvme -- scripts/common.sh@345 -- # : 1 00:07:33.336 12:05:34 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.336 12:05:34 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
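
The byte patterns in the wipefs lines above are the on-disk signatures themselves: 45 46 49 20 50 41 52 54 is ASCII for the GPT header magic "EFI PART" (primary header at offset 0x1000, backup at 0x13ffff000 here), and 55 aa at offset 0x1fe is the protective-MBR boot signature. A quick check of the decoding:

  echo '45 46 49 20 50 41 52 54' | xxd -r -p; echo    # prints: EFI PART
  printf '\x55\xaa' | xxd                             # 55aa, the MBR boot signature
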
00:07:33.336 12:05:34 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:33.336 12:05:34 nvme -- scripts/common.sh@353 -- # local d=1 00:07:33.336 12:05:34 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.336 12:05:34 nvme -- scripts/common.sh@355 -- # echo 1 00:07:33.336 12:05:34 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.336 12:05:34 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:33.336 12:05:34 nvme -- scripts/common.sh@353 -- # local d=2 00:07:33.336 12:05:34 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.336 12:05:34 nvme -- scripts/common.sh@355 -- # echo 2 00:07:33.336 12:05:34 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.336 12:05:34 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.336 12:05:34 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.336 12:05:34 nvme -- scripts/common.sh@368 -- # return 0 00:07:33.336 12:05:34 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.336 12:05:34 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.336 --rc genhtml_branch_coverage=1 00:07:33.336 --rc genhtml_function_coverage=1 00:07:33.336 --rc genhtml_legend=1 00:07:33.336 --rc geninfo_all_blocks=1 00:07:33.336 --rc geninfo_unexecuted_blocks=1 00:07:33.336 00:07:33.336 ' 00:07:33.336 12:05:34 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.336 --rc genhtml_branch_coverage=1 00:07:33.336 --rc genhtml_function_coverage=1 00:07:33.336 --rc genhtml_legend=1 00:07:33.336 --rc geninfo_all_blocks=1 00:07:33.336 --rc geninfo_unexecuted_blocks=1 00:07:33.336 00:07:33.336 ' 00:07:33.336 12:05:34 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.336 --rc genhtml_branch_coverage=1 00:07:33.336 --rc genhtml_function_coverage=1 00:07:33.336 --rc genhtml_legend=1 00:07:33.336 --rc geninfo_all_blocks=1 00:07:33.336 --rc geninfo_unexecuted_blocks=1 00:07:33.336 00:07:33.336 ' 00:07:33.336 12:05:34 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.336 --rc genhtml_branch_coverage=1 00:07:33.336 --rc genhtml_function_coverage=1 00:07:33.336 --rc genhtml_legend=1 00:07:33.336 --rc geninfo_all_blocks=1 00:07:33.336 --rc geninfo_unexecuted_blocks=1 00:07:33.336 00:07:33.336 ' 00:07:33.336 12:05:34 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:33.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:34.159 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:34.159 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:34.159 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:34.417 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:34.417 12:05:35 nvme -- nvme/nvme.sh@79 -- # uname 00:07:34.417 12:05:35 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:34.417 12:05:35 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:34.417 12:05:35 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:34.417 12:05:35 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:34.417
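
The lt 1.15 2 trace above walks scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and compared field by numeric field. A condensed sketch of the same comparison (missing fields default to 0 here; the real helper normalizes each field through its decimal() function):

  version_lt() {    # return 0 when $1 < $2
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"    # 1 < 2 on the first field, as traced
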
12:05:35 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:34.417 12:05:35 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:34.417 12:05:35 nvme -- common/autotest_common.sh@1075 -- # stubpid=63036 00:07:34.417 12:05:35 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:34.417 Waiting for stub to be ready for secondary processes... 00:07:34.417 12:05:35 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to be ready for secondary processes... 00:07:34.417 12:05:35 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:34.417 12:05:35 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63036 ]] 00:07:34.417 12:05:35 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:34.417 [2024-11-25 12:05:35.363652] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:07:34.417 [2024-11-25 12:05:35.363783] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:35.352 [2024-11-25 12:05:36.146502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.352 [2024-11-25 12:05:36.242383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.352 [2024-11-25 12:05:36.242817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.352 [2024-11-25 12:05:36.242844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.352 [2024-11-25 12:05:36.256430] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:35.352 [2024-11-25 12:05:36.256482] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:35.352 [2024-11-25 12:05:36.263804] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:35.352 [2024-11-25 12:05:36.263888] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:35.352 [2024-11-25 12:05:36.265399] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:35.352 [2024-11-25 12:05:36.265541] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:35.352 [2024-11-25 12:05:36.265590] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:35.352 [2024-11-25 12:05:36.267096] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:35.352 [2024-11-25 12:05:36.267225] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:35.352 [2024-11-25 12:05:36.267264] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:35.352 [2024-11-25 12:05:36.269007] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:35.352 [2024-11-25 12:05:36.269125] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:35.352 [2024-11-25 12:05:36.269172] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:35.352 [2024-11-25 12:05:36.269203] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:35.352 [2024-11-25 12:05:36.269229] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
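
The stub startup just traced is a poll-until-ready loop: launch test/app/stub in the background, then spin until it creates its /var/run/spdk_stub0 marker or its /proc entry disappears. A minimal sketch of that loop, simplified from the _start_stub trace (the real helper also saves and zeroes /proc/sys/kernel/randomize_va_space, visible as _randomize_va_space=2 and echo 0 above):

  "$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
  stubpid=$!
  echo "Waiting for stub to be ready for secondary processes..."
  while [ ! -e /var/run/spdk_stub0 ]; do
      # bail out if the stub process died before creating its marker
      [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
      sleep 1s
  done
  echo done.
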
00:07:35.352 12:05:36 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:35.352 12:05:36 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:35.352 done. 00:07:35.352 12:05:36 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:35.352 12:05:36 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:35.352 12:05:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.352 12:05:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.352 ************************************ 00:07:35.352 START TEST nvme_reset 00:07:35.352 ************************************ 00:07:35.352 12:05:36 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:35.611 Initializing NVMe Controllers 00:07:35.611 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:35.611 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:35.611 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:35.611 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:35.611 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:35.611 00:07:35.611 real 0m0.239s 00:07:35.611 user 0m0.093s 00:07:35.611 sys 0m0.100s 00:07:35.611 12:05:36 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.611 12:05:36 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 ************************************ 00:07:35.611 END TEST nvme_reset 00:07:35.611 ************************************ 00:07:35.611 12:05:36 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:35.611 12:05:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.611 12:05:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.611 12:05:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 ************************************ 00:07:35.611 START TEST nvme_identify 00:07:35.611 ************************************ 00:07:35.611 12:05:36 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:35.611 12:05:36 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:35.611 12:05:36 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:35.611 12:05:36 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:35.611 12:05:36 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:35.611 12:05:36 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:35.611 12:05:36 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:35.611 12:05:36 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:35.611 12:05:36 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:35.611 12:05:36 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:35.611 12:05:36 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:35.611 12:05:36 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:35.611 12:05:36 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:35.927
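
The get_nvme_bdfs call traced above is a jq projection over a generated bdev config: gen_nvme.sh emits JSON describing one controller per entry, and jq pulls each controller's PCI address. The same pipeline standalone:

  # Enumerate NVMe PCI addresses exactly as the trace does.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "No NVMe bdfs found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"    # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
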
[2024-11-25 12:05:36.851585] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63056 terminated unexpected 00:07:35.927 ===================================================== 00:07:35.927 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:35.927 ===================================================== 00:07:35.927 Controller Capabilities/Features 00:07:35.927 ================================ 00:07:35.927 Vendor ID: 1b36 00:07:35.927 Subsystem Vendor ID: 1af4 00:07:35.927 Serial Number: 12340 00:07:35.927 Model Number: QEMU NVMe Ctrl 00:07:35.927 Firmware Version: 8.0.0 00:07:35.927 Recommended Arb Burst: 6 00:07:35.927 IEEE OUI Identifier: 00 54 52 00:07:35.927 Multi-path I/O 00:07:35.927 May have multiple subsystem ports: No 00:07:35.927 May have multiple controllers: No 00:07:35.927 Associated with SR-IOV VF: No 00:07:35.927 Max Data Transfer Size: 524288 00:07:35.927 Max Number of Namespaces: 256 00:07:35.927 Max Number of I/O Queues: 64 00:07:35.927 NVMe Specification Version (VS): 1.4 00:07:35.927 NVMe Specification Version (Identify): 1.4 00:07:35.927 Maximum Queue Entries: 2048 00:07:35.927 Contiguous Queues Required: Yes 00:07:35.927 Arbitration Mechanisms Supported 00:07:35.927 Weighted Round Robin: Not Supported 00:07:35.927 Vendor Specific: Not Supported 00:07:35.927 Reset Timeout: 7500 ms 00:07:35.927 Doorbell Stride: 4 bytes 00:07:35.927 NVM Subsystem Reset: Not Supported 00:07:35.927 Command Sets Supported 00:07:35.927 NVM Command Set: Supported 00:07:35.927 Boot Partition: Not Supported 00:07:35.927 Memory Page Size Minimum: 4096 bytes 00:07:35.927 Memory Page Size Maximum: 65536 bytes 00:07:35.927 Persistent Memory Region: Not Supported 00:07:35.927 Optional Asynchronous Events Supported 00:07:35.927 Namespace Attribute Notices: Supported 00:07:35.927 Firmware Activation Notices: Not Supported 00:07:35.927 ANA Change Notices: Not Supported 00:07:35.927 PLE Aggregate Log Change Notices: Not Supported 00:07:35.927 LBA Status Info Alert Notices: Not Supported 00:07:35.927 EGE Aggregate Log Change Notices: Not Supported 00:07:35.927 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.927 Zone Descriptor Change Notices: Not Supported 00:07:35.927 Discovery Log Change Notices: Not Supported 00:07:35.927 Controller Attributes 00:07:35.927 128-bit Host Identifier: Not Supported 00:07:35.927 Non-Operational Permissive Mode: Not Supported 00:07:35.927 NVM Sets: Not Supported 00:07:35.927 Read Recovery Levels: Not Supported 00:07:35.927 Endurance Groups: Not Supported 00:07:35.927 Predictable Latency Mode: Not Supported 00:07:35.927 Traffic Based Keep Alive: Not Supported 00:07:35.927 Namespace Granularity: Not Supported 00:07:35.927 SQ Associations: Not Supported 00:07:35.927 UUID List: Not Supported 00:07:35.927 Multi-Domain Subsystem: Not Supported 00:07:35.927 Fixed Capacity Management: Not Supported 00:07:35.927 Variable Capacity Management: Not Supported 00:07:35.927 Delete Endurance Group: Not Supported 00:07:35.927 Delete NVM Set: Not Supported 00:07:35.927 Extended LBA Formats Supported: Supported 00:07:35.927 Flexible Data Placement Supported: Not Supported 00:07:35.927 00:07:35.927 Controller Memory Buffer Support 00:07:35.927 ================================ 00:07:35.927 Supported: No 00:07:35.927 00:07:35.927 Persistent Memory Region Support 00:07:35.927 ================================ 00:07:35.927 Supported: No 00:07:35.927 00:07:35.927 Admin Command Set Attributes 00:07:35.927 ============================ 00:07:35.927 Security Send/Receive: 
Not Supported 00:07:35.927 Format NVM: Supported 00:07:35.927 Firmware Activate/Download: Not Supported 00:07:35.927 Namespace Management: Supported 00:07:35.927 Device Self-Test: Not Supported 00:07:35.927 Directives: Supported 00:07:35.927 NVMe-MI: Not Supported 00:07:35.927 Virtualization Management: Not Supported 00:07:35.927 Doorbell Buffer Config: Supported 00:07:35.927 Get LBA Status Capability: Not Supported 00:07:35.927 Command & Feature Lockdown Capability: Not Supported 00:07:35.927 Abort Command Limit: 4 00:07:35.927 Async Event Request Limit: 4 00:07:35.927 Number of Firmware Slots: N/A 00:07:35.927 Firmware Slot 1 Read-Only: N/A 00:07:35.927 Firmware Activation Without Reset: N/A 00:07:35.927 Multiple Update Detection Support: N/A 00:07:35.927 Firmware Update Granularity: No Information Provided 00:07:35.927 Per-Namespace SMART Log: Yes 00:07:35.927 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.927 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:35.927 Command Effects Log Page: Supported 00:07:35.927 Get Log Page Extended Data: Supported 00:07:35.927 Telemetry Log Pages: Not Supported 00:07:35.927 Persistent Event Log Pages: Not Supported 00:07:35.927 Supported Log Pages Log Page: May Support 00:07:35.927 Commands Supported & Effects Log Page: Not Supported 00:07:35.927 Feature Identifiers & Effects Log Page:May Support 00:07:35.927 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.927 Data Area 4 for Telemetry Log: Not Supported 00:07:35.927 Error Log Page Entries Supported: 1 00:07:35.927 Keep Alive: Not Supported 00:07:35.927 00:07:35.927 NVM Command Set Attributes 00:07:35.927 ========================== 00:07:35.927 Submission Queue Entry Size 00:07:35.927 Max: 64 00:07:35.927 Min: 64 00:07:35.927 Completion Queue Entry Size 00:07:35.927 Max: 16 00:07:35.927 Min: 16 00:07:35.927 Number of Namespaces: 256 00:07:35.927 Compare Command: Supported 00:07:35.927 Write Uncorrectable Command: Not Supported 00:07:35.927 Dataset Management Command: Supported 00:07:35.927 Write Zeroes Command: Supported 00:07:35.927 Set Features Save Field: Supported 00:07:35.927 Reservations: Not Supported 00:07:35.927 Timestamp: Supported 00:07:35.927 Copy: Supported 00:07:35.927 Volatile Write Cache: Present 00:07:35.927 Atomic Write Unit (Normal): 1 00:07:35.927 Atomic Write Unit (PFail): 1 00:07:35.927 Atomic Compare & Write Unit: 1 00:07:35.927 Fused Compare & Write: Not Supported 00:07:35.927 Scatter-Gather List 00:07:35.927 SGL Command Set: Supported 00:07:35.927 SGL Keyed: Not Supported 00:07:35.927 SGL Bit Bucket Descriptor: Not Supported 00:07:35.927 SGL Metadata Pointer: Not Supported 00:07:35.927 Oversized SGL: Not Supported 00:07:35.927 SGL Metadata Address: Not Supported 00:07:35.927 SGL Offset: Not Supported 00:07:35.927 Transport SGL Data Block: Not Supported 00:07:35.927 Replay Protected Memory Block: Not Supported 00:07:35.927 00:07:35.927 Firmware Slot Information 00:07:35.927 ========================= 00:07:35.927 Active slot: 1 00:07:35.928 Slot 1 Firmware Revision: 1.0 00:07:35.928 00:07:35.928 00:07:35.928 Commands Supported and Effects 00:07:35.928 ============================== 00:07:35.928 Admin Commands 00:07:35.928 -------------- 00:07:35.928 Delete I/O Submission Queue (00h): Supported 00:07:35.928 Create I/O Submission Queue (01h): Supported 00:07:35.928 Get Log Page (02h): Supported 00:07:35.928 Delete I/O Completion Queue (04h): Supported 00:07:35.928 Create I/O Completion Queue (05h): Supported 00:07:35.928 Identify (06h): Supported 
00:07:35.928 Abort (08h): Supported 00:07:35.928 Set Features (09h): Supported 00:07:35.928 Get Features (0Ah): Supported 00:07:35.928 Asynchronous Event Request (0Ch): Supported 00:07:35.928 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.928 Directive Send (19h): Supported 00:07:35.928 Directive Receive (1Ah): Supported 00:07:35.928 Virtualization Management (1Ch): Supported 00:07:35.928 Doorbell Buffer Config (7Ch): Supported 00:07:35.928 Format NVM (80h): Supported LBA-Change 00:07:35.928 I/O Commands 00:07:35.928 ------------ 00:07:35.928 Flush (00h): Supported LBA-Change 00:07:35.928 Write (01h): Supported LBA-Change 00:07:35.928 Read (02h): Supported 00:07:35.928 Compare (05h): Supported 00:07:35.928 Write Zeroes (08h): Supported LBA-Change 00:07:35.928 Dataset Management (09h): Supported LBA-Change 00:07:35.928 Unknown (0Ch): Supported 00:07:35.928 Unknown (12h): Supported 00:07:35.928 Copy (19h): Supported LBA-Change 00:07:35.928 Unknown (1Dh): Supported LBA-Change 00:07:35.928 00:07:35.928 Error Log 00:07:35.928 ========= 00:07:35.928 00:07:35.928 Arbitration 00:07:35.928 =========== 00:07:35.928 Arbitration Burst: no limit 00:07:35.928 00:07:35.928 Power Management 00:07:35.928 ================ 00:07:35.928 Number of Power States: 1 00:07:35.928 Current Power State: Power State #0 00:07:35.928 Power State #0: 00:07:35.928 Max Power: 25.00 W 00:07:35.928 Non-Operational State: Operational 00:07:35.928 Entry Latency: 16 microseconds 00:07:35.928 Exit Latency: 4 microseconds 00:07:35.928 Relative Read Throughput: 0 00:07:35.928 Relative Read Latency: 0 00:07:35.928 Relative Write Throughput: 0 00:07:35.928 Relative Write Latency: 0 00:07:35.928 Idle Power[2024-11-25 12:05:36.852647] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63056 terminated unexpected 00:07:35.928 : Not Reported 00:07:35.928 Active Power: Not Reported 00:07:35.928 Non-Operational Permissive Mode: Not Supported 00:07:35.928 00:07:35.928 Health Information 00:07:35.928 ================== 00:07:35.928 Critical Warnings: 00:07:35.928 Available Spare Space: OK 00:07:35.928 Temperature: OK 00:07:35.928 Device Reliability: OK 00:07:35.928 Read Only: No 00:07:35.928 Volatile Memory Backup: OK 00:07:35.928 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.928 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.928 Available Spare: 0% 00:07:35.928 Available Spare Threshold: 0% 00:07:35.928 Life Percentage Used: 0% 00:07:35.928 Data Units Read: 635 00:07:35.928 Data Units Written: 563 00:07:35.928 Host Read Commands: 30536 00:07:35.928 Host Write Commands: 30322 00:07:35.928 Controller Busy Time: 0 minutes 00:07:35.928 Power Cycles: 0 00:07:35.928 Power On Hours: 0 hours 00:07:35.928 Unsafe Shutdowns: 0 00:07:35.928 Unrecoverable Media Errors: 0 00:07:35.928 Lifetime Error Log Entries: 0 00:07:35.928 Warning Temperature Time: 0 minutes 00:07:35.928 Critical Temperature Time: 0 minutes 00:07:35.928 00:07:35.928 Number of Queues 00:07:35.928 ================ 00:07:35.928 Number of I/O Submission Queues: 64 00:07:35.928 Number of I/O Completion Queues: 64 00:07:35.928 00:07:35.928 ZNS Specific Controller Data 00:07:35.928 ============================ 00:07:35.928 Zone Append Size Limit: 0 00:07:35.928 00:07:35.928 00:07:35.928 Active Namespaces 00:07:35.928 ================= 00:07:35.928 Namespace ID:1 00:07:35.928 Error Recovery Timeout: Unlimited 00:07:35.928 Command Set Identifier: NVM (00h) 00:07:35.928 Deallocate: Supported 00:07:35.928 
Deallocated/Unwritten Error: Supported 00:07:35.928 Deallocated Read Value: All 0x00 00:07:35.928 Deallocate in Write Zeroes: Not Supported 00:07:35.928 Deallocated Guard Field: 0xFFFF 00:07:35.928 Flush: Supported 00:07:35.928 Reservation: Not Supported 00:07:35.928 Metadata Transferred as: Separate Metadata Buffer 00:07:35.928 Namespace Sharing Capabilities: Private 00:07:35.928 Size (in LBAs): 1548666 (5GiB) 00:07:35.928 Capacity (in LBAs): 1548666 (5GiB) 00:07:35.928 Utilization (in LBAs): 1548666 (5GiB) 00:07:35.928 Thin Provisioning: Not Supported 00:07:35.928 Per-NS Atomic Units: No 00:07:35.928 Maximum Single Source Range Length: 128 00:07:35.928 Maximum Copy Length: 128 00:07:35.928 Maximum Source Range Count: 128 00:07:35.928 NGUID/EUI64 Never Reused: No 00:07:35.928 Namespace Write Protected: No 00:07:35.928 Number of LBA Formats: 8 00:07:35.928 Current LBA Format: LBA Format #07 00:07:35.928 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.928 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.928 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.928 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.928 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.928 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.928 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.928 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.928 00:07:35.928 NVM Specific Namespace Data 00:07:35.928 =========================== 00:07:35.928 Logical Block Storage Tag Mask: 0 00:07:35.928 Protection Information Capabilities: 00:07:35.928 16b Guard Protection Information Storage Tag Support: No 00:07:35.928 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.928 Storage Tag Check Read Support: No 00:07:35.928 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.928 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.928 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.928 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.928 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.928 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.928 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.928 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.928 ===================================================== 00:07:35.928 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:35.928 ===================================================== 00:07:35.928 Controller Capabilities/Features 00:07:35.928 ================================ 00:07:35.928 Vendor ID: 1b36 00:07:35.928 Subsystem Vendor ID: 1af4 00:07:35.928 Serial Number: 12341 00:07:35.928 Model Number: QEMU NVMe Ctrl 00:07:35.928 Firmware Version: 8.0.0 00:07:35.928 Recommended Arb Burst: 6 00:07:35.928 IEEE OUI Identifier: 00 54 52 00:07:35.928 Multi-path I/O 00:07:35.928 May have multiple subsystem ports: No 00:07:35.928 May have multiple controllers: No 00:07:35.928 Associated with SR-IOV VF: No 00:07:35.928 Max Data Transfer Size: 524288 00:07:35.928 Max Number of Namespaces: 256 00:07:35.928 Max Number of I/O Queues: 64 00:07:35.928 NVMe Specification Version (VS): 1.4 00:07:35.928 NVMe 
Specification Version (Identify): 1.4 00:07:35.928 Maximum Queue Entries: 2048 00:07:35.928 Contiguous Queues Required: Yes 00:07:35.928 Arbitration Mechanisms Supported 00:07:35.928 Weighted Round Robin: Not Supported 00:07:35.928 Vendor Specific: Not Supported 00:07:35.928 Reset Timeout: 7500 ms 00:07:35.928 Doorbell Stride: 4 bytes 00:07:35.928 NVM Subsystem Reset: Not Supported 00:07:35.928 Command Sets Supported 00:07:35.928 NVM Command Set: Supported 00:07:35.928 Boot Partition: Not Supported 00:07:35.928 Memory Page Size Minimum: 4096 bytes 00:07:35.928 Memory Page Size Maximum: 65536 bytes 00:07:35.928 Persistent Memory Region: Not Supported 00:07:35.928 Optional Asynchronous Events Supported 00:07:35.928 Namespace Attribute Notices: Supported 00:07:35.928 Firmware Activation Notices: Not Supported 00:07:35.928 ANA Change Notices: Not Supported 00:07:35.928 PLE Aggregate Log Change Notices: Not Supported 00:07:35.928 LBA Status Info Alert Notices: Not Supported 00:07:35.928 EGE Aggregate Log Change Notices: Not Supported 00:07:35.928 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.928 Zone Descriptor Change Notices: Not Supported 00:07:35.928 Discovery Log Change Notices: Not Supported 00:07:35.928 Controller Attributes 00:07:35.928 128-bit Host Identifier: Not Supported 00:07:35.928 Non-Operational Permissive Mode: Not Supported 00:07:35.928 NVM Sets: Not Supported 00:07:35.929 Read Recovery Levels: Not Supported 00:07:35.929 Endurance Groups: Not Supported 00:07:35.929 Predictable Latency Mode: Not Supported 00:07:35.929 Traffic Based Keep Alive: Not Supported 00:07:35.929 Namespace Granularity: Not Supported 00:07:35.929 SQ Associations: Not Supported 00:07:35.929 UUID List: Not Supported 00:07:35.929 Multi-Domain Subsystem: Not Supported 00:07:35.929 Fixed Capacity Management: Not Supported 00:07:35.929 Variable Capacity Management: Not Supported 00:07:35.929 Delete Endurance Group: Not Supported 00:07:35.929 Delete NVM Set: Not Supported 00:07:35.929 Extended LBA Formats Supported: Supported 00:07:35.929 Flexible Data Placement Supported: Not Supported 00:07:35.929 00:07:35.929 Controller Memory Buffer Support 00:07:35.929 ================================ 00:07:35.929 Supported: No 00:07:35.929 00:07:35.929 Persistent Memory Region Support 00:07:35.929 ================================ 00:07:35.929 Supported: No 00:07:35.929 00:07:35.929 Admin Command Set Attributes 00:07:35.929 ============================ 00:07:35.929 Security Send/Receive: Not Supported 00:07:35.929 Format NVM: Supported 00:07:35.929 Firmware Activate/Download: Not Supported 00:07:35.929 Namespace Management: Supported 00:07:35.929 Device Self-Test: Not Supported 00:07:35.929 Directives: Supported 00:07:35.929 NVMe-MI: Not Supported 00:07:35.929 Virtualization Management: Not Supported 00:07:35.929 Doorbell Buffer Config: Supported 00:07:35.929 Get LBA Status Capability: Not Supported 00:07:35.929 Command & Feature Lockdown Capability: Not Supported 00:07:35.929 Abort Command Limit: 4 00:07:35.929 Async Event Request Limit: 4 00:07:35.929 Number of Firmware Slots: N/A 00:07:35.929 Firmware Slot 1 Read-Only: N/A 00:07:35.929 Firmware Activation Without Reset: N/A 00:07:35.929 Multiple Update Detection Support: N/A 00:07:35.929 Firmware Update Granularity: No Information Provided 00:07:35.929 Per-Namespace SMART Log: Yes 00:07:35.929 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.929 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:35.929 Command Effects Log Page: Supported 
00:07:35.929 Get Log Page Extended Data: Supported 00:07:35.929 Telemetry Log Pages: Not Supported 00:07:35.929 Persistent Event Log Pages: Not Supported 00:07:35.929 Supported Log Pages Log Page: May Support 00:07:35.929 Commands Supported & Effects Log Page: Not Supported 00:07:35.929 Feature Identifiers & Effects Log Page:May Support 00:07:35.929 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.929 Data Area 4 for Telemetry Log: Not Supported 00:07:35.929 Error Log Page Entries Supported: 1 00:07:35.929 Keep Alive: Not Supported 00:07:35.929 00:07:35.929 NVM Command Set Attributes 00:07:35.929 ========================== 00:07:35.929 Submission Queue Entry Size 00:07:35.929 Max: 64 00:07:35.929 Min: 64 00:07:35.929 Completion Queue Entry Size 00:07:35.929 Max: 16 00:07:35.929 Min: 16 00:07:35.929 Number of Namespaces: 256 00:07:35.929 Compare Command: Supported 00:07:35.929 Write Uncorrectable Command: Not Supported 00:07:35.929 Dataset Management Command: Supported 00:07:35.929 Write Zeroes Command: Supported 00:07:35.929 Set Features Save Field: Supported 00:07:35.929 Reservations: Not Supported 00:07:35.929 Timestamp: Supported 00:07:35.929 Copy: Supported 00:07:35.929 Volatile Write Cache: Present 00:07:35.929 Atomic Write Unit (Normal): 1 00:07:35.929 Atomic Write Unit (PFail): 1 00:07:35.929 Atomic Compare & Write Unit: 1 00:07:35.929 Fused Compare & Write: Not Supported 00:07:35.929 Scatter-Gather List 00:07:35.929 SGL Command Set: Supported 00:07:35.929 SGL Keyed: Not Supported 00:07:35.929 SGL Bit Bucket Descriptor: Not Supported 00:07:35.929 SGL Metadata Pointer: Not Supported 00:07:35.929 Oversized SGL: Not Supported 00:07:35.929 SGL Metadata Address: Not Supported 00:07:35.929 SGL Offset: Not Supported 00:07:35.929 Transport SGL Data Block: Not Supported 00:07:35.929 Replay Protected Memory Block: Not Supported 00:07:35.929 00:07:35.929 Firmware Slot Information 00:07:35.929 ========================= 00:07:35.929 Active slot: 1 00:07:35.929 Slot 1 Firmware Revision: 1.0 00:07:35.929 00:07:35.929 00:07:35.929 Commands Supported and Effects 00:07:35.929 ============================== 00:07:35.929 Admin Commands 00:07:35.929 -------------- 00:07:35.929 Delete I/O Submission Queue (00h): Supported 00:07:35.929 Create I/O Submission Queue (01h): Supported 00:07:35.929 Get Log Page (02h): Supported 00:07:35.929 Delete I/O Completion Queue (04h): Supported 00:07:35.929 Create I/O Completion Queue (05h): Supported 00:07:35.929 Identify (06h): Supported 00:07:35.929 Abort (08h): Supported 00:07:35.929 Set Features (09h): Supported 00:07:35.929 Get Features (0Ah): Supported 00:07:35.929 Asynchronous Event Request (0Ch): Supported 00:07:35.929 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.929 Directive Send (19h): Supported 00:07:35.929 Directive Receive (1Ah): Supported 00:07:35.929 Virtualization Management (1Ch): Supported 00:07:35.929 Doorbell Buffer Config (7Ch): Supported 00:07:35.929 Format NVM (80h): Supported LBA-Change 00:07:35.929 I/O Commands 00:07:35.929 ------------ 00:07:35.929 Flush (00h): Supported LBA-Change 00:07:35.929 Write (01h): Supported LBA-Change 00:07:35.929 Read (02h): Supported 00:07:35.929 Compare (05h): Supported 00:07:35.929 Write Zeroes (08h): Supported LBA-Change 00:07:35.929 Dataset Management (09h): Supported LBA-Change 00:07:35.929 Unknown (0Ch): Supported 00:07:35.929 Unknown (12h): Supported 00:07:35.929 Copy (19h): Supported LBA-Change 00:07:35.929 Unknown (1Dh): Supported LBA-Change 00:07:35.929 00:07:35.929 Error 
Log 00:07:35.929 ========= 00:07:35.929 00:07:35.929 Arbitration 00:07:35.929 =========== 00:07:35.929 Arbitration Burst: no limit 00:07:35.929 00:07:35.929 Power Management 00:07:35.929 ================ 00:07:35.929 Number of Power States: 1 00:07:35.929 Current Power State: Power State #0 00:07:35.929 Power State #0: 00:07:35.929 Max Power: 25.00 W 00:07:35.929 Non-Operational State: Operational 00:07:35.929 Entry Latency: 16 microseconds 00:07:35.929 Exit Latency: 4 microseconds 00:07:35.929 Relative Read Throughput: 0 00:07:35.929 Relative Read Latency: 0 00:07:35.929 Relative Write Throughput: 0 00:07:35.929 Relative Write Latency: 0 00:07:35.929 Idle Power: Not Reported 00:07:35.929 Active Power: Not Reported 00:07:35.929 Non-Operational Permissive Mode: Not Supported 00:07:35.929 00:07:35.929 Health Information 00:07:35.929 ================== 00:07:35.929 Critical Warnings: 00:07:35.929 Available Spare Space: OK 00:07:35.929 Temperature: [2024-11-25 12:05:36.853742] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63056 terminated unexpected 00:07:35.929 OK 00:07:35.929 Device Reliability: OK 00:07:35.929 Read Only: No 00:07:35.929 Volatile Memory Backup: OK 00:07:35.929 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.929 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.929 Available Spare: 0% 00:07:35.929 Available Spare Threshold: 0% 00:07:35.929 Life Percentage Used: 0% 00:07:35.929 Data Units Read: 954 00:07:35.929 Data Units Written: 827 00:07:35.929 Host Read Commands: 45372 00:07:35.929 Host Write Commands: 44270 00:07:35.929 Controller Busy Time: 0 minutes 00:07:35.929 Power Cycles: 0 00:07:35.929 Power On Hours: 0 hours 00:07:35.929 Unsafe Shutdowns: 0 00:07:35.929 Unrecoverable Media Errors: 0 00:07:35.929 Lifetime Error Log Entries: 0 00:07:35.929 Warning Temperature Time: 0 minutes 00:07:35.929 Critical Temperature Time: 0 minutes 00:07:35.929 00:07:35.930 Number of Queues 00:07:35.930 ================ 00:07:35.930 Number of I/O Submission Queues: 64 00:07:35.930 Number of I/O Completion Queues: 64 00:07:35.930 00:07:35.930 ZNS Specific Controller Data 00:07:35.930 ============================ 00:07:35.930 Zone Append Size Limit: 0 00:07:35.930 00:07:35.930 00:07:35.930 Active Namespaces 00:07:35.930 ================= 00:07:35.930 Namespace ID:1 00:07:35.930 Error Recovery Timeout: Unlimited 00:07:35.930 Command Set Identifier: NVM (00h) 00:07:35.930 Deallocate: Supported 00:07:35.930 Deallocated/Unwritten Error: Supported 00:07:35.930 Deallocated Read Value: All 0x00 00:07:35.930 Deallocate in Write Zeroes: Not Supported 00:07:35.930 Deallocated Guard Field: 0xFFFF 00:07:35.930 Flush: Supported 00:07:35.930 Reservation: Not Supported 00:07:35.930 Namespace Sharing Capabilities: Private 00:07:35.930 Size (in LBAs): 1310720 (5GiB) 00:07:35.930 Capacity (in LBAs): 1310720 (5GiB) 00:07:35.930 Utilization (in LBAs): 1310720 (5GiB) 00:07:35.930 Thin Provisioning: Not Supported 00:07:35.930 Per-NS Atomic Units: No 00:07:35.930 Maximum Single Source Range Length: 128 00:07:35.930 Maximum Copy Length: 128 00:07:35.930 Maximum Source Range Count: 128 00:07:35.930 NGUID/EUI64 Never Reused: No 00:07:35.930 Namespace Write Protected: No 00:07:35.930 Number of LBA Formats: 8 00:07:35.930 Current LBA Format: LBA Format #04 00:07:35.930 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.930 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.930 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.930 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:35.930 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.930 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.930 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.930 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.930 00:07:35.930 NVM Specific Namespace Data 00:07:35.930 =========================== 00:07:35.930 Logical Block Storage Tag Mask: 0 00:07:35.930 Protection Information Capabilities: 00:07:35.930 16b Guard Protection Information Storage Tag Support: No 00:07:35.930 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.930 Storage Tag Check Read Support: No 00:07:35.930 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.930 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.930 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.930 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.930 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.930 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.930 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.930 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.930 ===================================================== 00:07:35.930 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:35.930 ===================================================== 00:07:35.930 Controller Capabilities/Features 00:07:35.930 ================================ 00:07:35.930 Vendor ID: 1b36 00:07:35.930 Subsystem Vendor ID: 1af4 00:07:35.930 Serial Number: 12343 00:07:35.930 Model Number: QEMU NVMe Ctrl 00:07:35.930 Firmware Version: 8.0.0 00:07:35.930 Recommended Arb Burst: 6 00:07:35.930 IEEE OUI Identifier: 00 54 52 00:07:35.930 Multi-path I/O 00:07:35.930 May have multiple subsystem ports: No 00:07:35.930 May have multiple controllers: Yes 00:07:35.930 Associated with SR-IOV VF: No 00:07:35.930 Max Data Transfer Size: 524288 00:07:35.930 Max Number of Namespaces: 256 00:07:35.930 Max Number of I/O Queues: 64 00:07:35.930 NVMe Specification Version (VS): 1.4 00:07:35.930 NVMe Specification Version (Identify): 1.4 00:07:35.930 Maximum Queue Entries: 2048 00:07:35.930 Contiguous Queues Required: Yes 00:07:35.930 Arbitration Mechanisms Supported 00:07:35.930 Weighted Round Robin: Not Supported 00:07:35.930 Vendor Specific: Not Supported 00:07:35.930 Reset Timeout: 7500 ms 00:07:35.930 Doorbell Stride: 4 bytes 00:07:35.930 NVM Subsystem Reset: Not Supported 00:07:35.930 Command Sets Supported 00:07:35.930 NVM Command Set: Supported 00:07:35.930 Boot Partition: Not Supported 00:07:35.930 Memory Page Size Minimum: 4096 bytes 00:07:35.930 Memory Page Size Maximum: 65536 bytes 00:07:35.930 Persistent Memory Region: Not Supported 00:07:35.930 Optional Asynchronous Events Supported 00:07:35.930 Namespace Attribute Notices: Supported 00:07:35.930 Firmware Activation Notices: Not Supported 00:07:35.930 ANA Change Notices: Not Supported 00:07:35.930 PLE Aggregate Log Change Notices: Not Supported 00:07:35.930 LBA Status Info Alert Notices: Not Supported 00:07:35.930 EGE Aggregate Log Change Notices: Not Supported 00:07:35.930 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.930 Zone 
Descriptor Change Notices: Not Supported 00:07:35.930 Discovery Log Change Notices: Not Supported 00:07:35.930 Controller Attributes 00:07:35.930 128-bit Host Identifier: Not Supported 00:07:35.930 Non-Operational Permissive Mode: Not Supported 00:07:35.930 NVM Sets: Not Supported 00:07:35.930 Read Recovery Levels: Not Supported 00:07:35.930 Endurance Groups: Supported 00:07:35.930 Predictable Latency Mode: Not Supported 00:07:35.930 Traffic Based Keep Alive: Not Supported 00:07:35.930 Namespace Granularity: Not Supported 00:07:35.930 SQ Associations: Not Supported 00:07:35.930 UUID List: Not Supported 00:07:35.930 Multi-Domain Subsystem: Not Supported 00:07:35.930 Fixed Capacity Management: Not Supported 00:07:35.930 Variable Capacity Management: Not Supported 00:07:35.930 Delete Endurance Group: Not Supported 00:07:35.930 Delete NVM Set: Not Supported 00:07:35.930 Extended LBA Formats Supported: Supported 00:07:35.930 Flexible Data Placement Supported: Supported 00:07:35.930 00:07:35.930 Controller Memory Buffer Support 00:07:35.930 ================================ 00:07:35.930 Supported: No 00:07:35.930 00:07:35.930 Persistent Memory Region Support 00:07:35.930 ================================ 00:07:35.930 Supported: No 00:07:35.930 00:07:35.930 Admin Command Set Attributes 00:07:35.930 ============================ 00:07:35.930 Security Send/Receive: Not Supported 00:07:35.930 Format NVM: Supported 00:07:35.930 Firmware Activate/Download: Not Supported 00:07:35.930 Namespace Management: Supported 00:07:35.930 Device Self-Test: Not Supported 00:07:35.930 Directives: Supported 00:07:35.930 NVMe-MI: Not Supported 00:07:35.930 Virtualization Management: Not Supported 00:07:35.930 Doorbell Buffer Config: Supported 00:07:35.930 Get LBA Status Capability: Not Supported 00:07:35.930 Command & Feature Lockdown Capability: Not Supported 00:07:35.930 Abort Command Limit: 4 00:07:35.930 Async Event Request Limit: 4 00:07:35.930 Number of Firmware Slots: N/A 00:07:35.930 Firmware Slot 1 Read-Only: N/A 00:07:35.930 Firmware Activation Without Reset: N/A 00:07:35.930 Multiple Update Detection Support: N/A 00:07:35.930 Firmware Update Granularity: No Information Provided 00:07:35.930 Per-Namespace SMART Log: Yes 00:07:35.930 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.930 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:35.930 Command Effects Log Page: Supported 00:07:35.930 Get Log Page Extended Data: Supported 00:07:35.930 Telemetry Log Pages: Not Supported 00:07:35.930 Persistent Event Log Pages: Not Supported 00:07:35.930 Supported Log Pages Log Page: May Support 00:07:35.930 Commands Supported & Effects Log Page: Not Supported 00:07:35.930 Feature Identifiers & Effects Log Page:May Support 00:07:35.930 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.930 Data Area 4 for Telemetry Log: Not Supported 00:07:35.930 Error Log Page Entries Supported: 1 00:07:35.930 Keep Alive: Not Supported 00:07:35.931 00:07:35.931 NVM Command Set Attributes 00:07:35.931 ========================== 00:07:35.931 Submission Queue Entry Size 00:07:35.931 Max: 64 00:07:35.931 Min: 64 00:07:35.931 Completion Queue Entry Size 00:07:35.931 Max: 16 00:07:35.931 Min: 16 00:07:35.931 Number of Namespaces: 256 00:07:35.931 Compare Command: Supported 00:07:35.931 Write Uncorrectable Command: Not Supported 00:07:35.931 Dataset Management Command: Supported 00:07:35.931 Write Zeroes Command: Supported 00:07:35.931 Set Features Save Field: Supported 00:07:35.931 Reservations: Not Supported 00:07:35.931 
Timestamp: Supported 00:07:35.931 Copy: Supported 00:07:35.931 Volatile Write Cache: Present 00:07:35.931 Atomic Write Unit (Normal): 1 00:07:35.931 Atomic Write Unit (PFail): 1 00:07:35.931 Atomic Compare & Write Unit: 1 00:07:35.931 Fused Compare & Write: Not Supported 00:07:35.931 Scatter-Gather List 00:07:35.931 SGL Command Set: Supported 00:07:35.931 SGL Keyed: Not Supported 00:07:35.931 SGL Bit Bucket Descriptor: Not Supported 00:07:35.931 SGL Metadata Pointer: Not Supported 00:07:35.931 Oversized SGL: Not Supported 00:07:35.931 SGL Metadata Address: Not Supported 00:07:35.931 SGL Offset: Not Supported 00:07:35.931 Transport SGL Data Block: Not Supported 00:07:35.931 Replay Protected Memory Block: Not Supported 00:07:35.931 00:07:35.931 Firmware Slot Information 00:07:35.931 ========================= 00:07:35.931 Active slot: 1 00:07:35.931 Slot 1 Firmware Revision: 1.0 00:07:35.931 00:07:35.931 00:07:35.931 Commands Supported and Effects 00:07:35.931 ============================== 00:07:35.931 Admin Commands 00:07:35.931 -------------- 00:07:35.931 Delete I/O Submission Queue (00h): Supported 00:07:35.931 Create I/O Submission Queue (01h): Supported 00:07:35.931 Get Log Page (02h): Supported 00:07:35.931 Delete I/O Completion Queue (04h): Supported 00:07:35.931 Create I/O Completion Queue (05h): Supported 00:07:35.931 Identify (06h): Supported 00:07:35.931 Abort (08h): Supported 00:07:35.931 Set Features (09h): Supported 00:07:35.931 Get Features (0Ah): Supported 00:07:35.931 Asynchronous Event Request (0Ch): Supported 00:07:35.931 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.931 Directive Send (19h): Supported 00:07:35.931 Directive Receive (1Ah): Supported 00:07:35.931 Virtualization Management (1Ch): Supported 00:07:35.931 Doorbell Buffer Config (7Ch): Supported 00:07:35.931 Format NVM (80h): Supported LBA-Change 00:07:35.931 I/O Commands 00:07:35.931 ------------ 00:07:35.931 Flush (00h): Supported LBA-Change 00:07:35.931 Write (01h): Supported LBA-Change 00:07:35.931 Read (02h): Supported 00:07:35.931 Compare (05h): Supported 00:07:35.931 Write Zeroes (08h): Supported LBA-Change 00:07:35.931 Dataset Management (09h): Supported LBA-Change 00:07:35.931 Unknown (0Ch): Supported 00:07:35.931 Unknown (12h): Supported 00:07:35.931 Copy (19h): Supported LBA-Change 00:07:35.931 Unknown (1Dh): Supported LBA-Change 00:07:35.931 00:07:35.931 Error Log 00:07:35.931 ========= 00:07:35.931 00:07:35.931 Arbitration 00:07:35.931 =========== 00:07:35.931 Arbitration Burst: no limit 00:07:35.931 00:07:35.931 Power Management 00:07:35.931 ================ 00:07:35.931 Number of Power States: 1 00:07:35.931 Current Power State: Power State #0 00:07:35.931 Power State #0: 00:07:35.931 Max Power: 25.00 W 00:07:35.931 Non-Operational State: Operational 00:07:35.931 Entry Latency: 16 microseconds 00:07:35.931 Exit Latency: 4 microseconds 00:07:35.931 Relative Read Throughput: 0 00:07:35.931 Relative Read Latency: 0 00:07:35.931 Relative Write Throughput: 0 00:07:35.931 Relative Write Latency: 0 00:07:35.931 Idle Power: Not Reported 00:07:35.931 Active Power: Not Reported 00:07:35.931 Non-Operational Permissive Mode: Not Supported 00:07:35.931 00:07:35.931 Health Information 00:07:35.931 ================== 00:07:35.931 Critical Warnings: 00:07:35.931 Available Spare Space: OK 00:07:35.931 Temperature: OK 00:07:35.931 Device Reliability: OK 00:07:35.931 Read Only: No 00:07:35.931 Volatile Memory Backup: OK 00:07:35.931 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.931 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.931 Available Spare: 0% 00:07:35.931 Available Spare Threshold: 0% 00:07:35.931 Life Percentage Used: 0% 00:07:35.931 Data Units Read: 716 00:07:35.931 Data Units Written: 645 00:07:35.931 Host Read Commands: 31592 00:07:35.931 Host Write Commands: 31015 00:07:35.931 Controller Busy Time: 0 minutes 00:07:35.931 Power Cycles: 0 00:07:35.931 Power On Hours: 0 hours 00:07:35.931 Unsafe Shutdowns: 0 00:07:35.931 Unrecoverable Media Errors: 0 00:07:35.931 Lifetime Error Log Entries: 0 00:07:35.931 Warning Temperature Time: 0 minutes 00:07:35.931 Critical Temperature Time: 0 minutes 00:07:35.931 00:07:35.931 Number of Queues 00:07:35.931 ================ 00:07:35.931 Number of I/O Submission Queues: 64 00:07:35.931 Number of I/O Completion Queues: 64 00:07:35.931 00:07:35.931 ZNS Specific Controller Data 00:07:35.931 ============================ 00:07:35.931 Zone Append Size Limit: 0 00:07:35.931 00:07:35.931 00:07:35.931 Active Namespaces 00:07:35.931 ================= 00:07:35.931 Namespace ID:1 00:07:35.931 Error Recovery Timeout: Unlimited 00:07:35.931 Command Set Identifier: NVM (00h) 00:07:35.931 Deallocate: Supported 00:07:35.931 Deallocated/Unwritten Error: Supported 00:07:35.931 Deallocated Read Value: All 0x00 00:07:35.931 Deallocate in Write Zeroes: Not Supported 00:07:35.931 Deallocated Guard Field: 0xFFFF 00:07:35.931 Flush: Supported 00:07:35.931 Reservation: Not Supported 00:07:35.931 Namespace Sharing Capabilities: Multiple Controllers 00:07:35.931 Size (in LBAs): 262144 (1GiB) 00:07:35.931 Capacity (in LBAs): 262144 (1GiB) 00:07:35.931 Utilization (in LBAs): 262144 (1GiB) 00:07:35.931 Thin Provisioning: Not Supported 00:07:35.931 Per-NS Atomic Units: No 00:07:35.931 Maximum Single Source Range Length: 128 00:07:35.931 Maximum Copy Length: 128 00:07:35.931 Maximum Source Range Count: 128 00:07:35.931 NGUID/EUI64 Never Reused: No 00:07:35.931 Namespace Write Protected: No 00:07:35.931 Endurance group ID: 1 00:07:35.931 Number of LBA Formats: 8 00:07:35.931 Current LBA Format: LBA Format #04 00:07:35.931 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.931 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.931 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.931 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.931 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.931 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.931 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.931 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.931 00:07:35.931 Get Feature FDP: 00:07:35.931 ================ 00:07:35.931 Enabled: Yes 00:07:35.931 FDP configuration index: 0 00:07:35.931 00:07:35.931 FDP configurations log page 00:07:35.931 =========================== 00:07:35.931 Number of FDP configurations: 1 00:07:35.931 Version: 0 00:07:35.931 Size: 112 00:07:35.931 FDP Configuration Descriptor: 0 00:07:35.931 Descriptor Size: 96 00:07:35.931 Reclaim Group Identifier format: 2 00:07:35.931 FDP Volatile Write Cache: Not Present 00:07:35.931 FDP Configuration: Valid 00:07:35.931 Vendor Specific Size: 0 00:07:35.931 Number of Reclaim Groups: 2 00:07:35.931 Number of Reclaim Unit Handles: 8 00:07:35.931 Max Placement Identifiers: 128 00:07:35.931 Number of Namespaces Supported: 256 00:07:35.931 Reclaim Unit Nominal Size: 6000000 bytes 00:07:35.931 Estimated Reclaim Unit Time Limit: Not Reported 00:07:35.931 RUH Desc #000: RUH Type: Initially Isolated 00:07:35.931 RUH Desc #001: RUH 
Type: Initially Isolated 00:07:35.931 RUH Desc #002: RUH Type: Initially Isolated 00:07:35.931 RUH Desc #003: RUH Type: Initially Isolated 00:07:35.931 RUH Desc #004: RUH Type: Initially Isolated 00:07:35.931 RUH Desc #005: RUH Type: Initially Isolated 00:07:35.931 RUH Desc #006: RUH Type: Initially Isolated 00:07:35.931 RUH Desc #007: RUH Type: Initially Isolated 00:07:35.931 00:07:35.931 FDP reclaim unit handle usage log page 00:07:35.931 ====================================== 00:07:35.931 Number of Reclaim Unit Handles: 8 00:07:35.931 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:35.931 RUH Usage Desc #001: RUH Attributes: Unused 00:07:35.931 RUH Usage Desc #002: RUH Attributes: Unused 00:07:35.931 RUH Usage Desc #003: RUH Attributes: Unused 00:07:35.931 RUH Usage Desc #004: RUH Attributes: Unused 00:07:35.931 RUH Usage Desc #005: RUH Attributes: Unused 00:07:35.931 RUH Usage Desc #006: RUH Attributes: Unused 00:07:35.932 RUH Usage Desc #007: RUH Attributes: Unused 00:07:35.932 00:07:35.932 FDP statistics log page 00:07:35.932 ======================= 00:07:35.932 Host bytes with metadata written: 413704192 00:07:35.932 Media[2024-11-25 12:05:36.854849] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63056 terminated unexpected 00:07:35.932 bytes with metadata written: 413749248 00:07:35.932 Media bytes erased: 0 00:07:35.932 00:07:35.932 FDP events log page 00:07:35.932 =================== 00:07:35.932 Number of FDP events: 0 00:07:35.932 00:07:35.932 NVM Specific Namespace Data 00:07:35.932 =========================== 00:07:35.932 Logical Block Storage Tag Mask: 0 00:07:35.932 Protection Information Capabilities: 00:07:35.932 16b Guard Protection Information Storage Tag Support: No 00:07:35.932 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.932 Storage Tag Check Read Support: No 00:07:35.932 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.932 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.932 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.932 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.932 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.932 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.932 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.932 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.932 ===================================================== 00:07:35.932 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:35.932 ===================================================== 00:07:35.932 Controller Capabilities/Features 00:07:35.932 ================================ 00:07:35.932 Vendor ID: 1b36 00:07:35.932 Subsystem Vendor ID: 1af4 00:07:35.932 Serial Number: 12342 00:07:35.932 Model Number: QEMU NVMe Ctrl 00:07:35.932 Firmware Version: 8.0.0 00:07:35.932 Recommended Arb Burst: 6 00:07:35.932 IEEE OUI Identifier: 00 54 52 00:07:35.932 Multi-path I/O 00:07:35.932 May have multiple subsystem ports: No 00:07:35.932 May have multiple controllers: No 00:07:35.932 Associated with SR-IOV VF: No 00:07:35.932 Max Data Transfer Size: 524288 00:07:35.932 Max Number of Namespaces: 256 00:07:35.932 
Max Number of I/O Queues: 64 00:07:35.932 NVMe Specification Version (VS): 1.4 00:07:35.932 NVMe Specification Version (Identify): 1.4 00:07:35.932 Maximum Queue Entries: 2048 00:07:35.932 Contiguous Queues Required: Yes 00:07:35.932 Arbitration Mechanisms Supported 00:07:35.932 Weighted Round Robin: Not Supported 00:07:35.932 Vendor Specific: Not Supported 00:07:35.932 Reset Timeout: 7500 ms 00:07:35.932 Doorbell Stride: 4 bytes 00:07:35.932 NVM Subsystem Reset: Not Supported 00:07:35.932 Command Sets Supported 00:07:35.932 NVM Command Set: Supported 00:07:35.932 Boot Partition: Not Supported 00:07:35.932 Memory Page Size Minimum: 4096 bytes 00:07:35.932 Memory Page Size Maximum: 65536 bytes 00:07:35.932 Persistent Memory Region: Not Supported 00:07:35.932 Optional Asynchronous Events Supported 00:07:35.932 Namespace Attribute Notices: Supported 00:07:35.932 Firmware Activation Notices: Not Supported 00:07:35.932 ANA Change Notices: Not Supported 00:07:35.932 PLE Aggregate Log Change Notices: Not Supported 00:07:35.932 LBA Status Info Alert Notices: Not Supported 00:07:35.932 EGE Aggregate Log Change Notices: Not Supported 00:07:35.932 Normal NVM Subsystem Shutdown event: Not Supported 00:07:35.932 Zone Descriptor Change Notices: Not Supported 00:07:35.932 Discovery Log Change Notices: Not Supported 00:07:35.932 Controller Attributes 00:07:35.932 128-bit Host Identifier: Not Supported 00:07:35.932 Non-Operational Permissive Mode: Not Supported 00:07:35.932 NVM Sets: Not Supported 00:07:35.932 Read Recovery Levels: Not Supported 00:07:35.932 Endurance Groups: Not Supported 00:07:35.932 Predictable Latency Mode: Not Supported 00:07:35.932 Traffic Based Keep Alive: Not Supported 00:07:35.932 Namespace Granularity: Not Supported 00:07:35.932 SQ Associations: Not Supported 00:07:35.932 UUID List: Not Supported 00:07:35.932 Multi-Domain Subsystem: Not Supported 00:07:35.932 Fixed Capacity Management: Not Supported 00:07:35.932 Variable Capacity Management: Not Supported 00:07:35.932 Delete Endurance Group: Not Supported 00:07:35.932 Delete NVM Set: Not Supported 00:07:35.932 Extended LBA Formats Supported: Supported 00:07:35.932 Flexible Data Placement Supported: Not Supported 00:07:35.932 00:07:35.932 Controller Memory Buffer Support 00:07:35.932 ================================ 00:07:35.932 Supported: No 00:07:35.932 00:07:35.932 Persistent Memory Region Support 00:07:35.932 ================================ 00:07:35.932 Supported: No 00:07:35.932 00:07:35.932 Admin Command Set Attributes 00:07:35.932 ============================ 00:07:35.932 Security Send/Receive: Not Supported 00:07:35.932 Format NVM: Supported 00:07:35.932 Firmware Activate/Download: Not Supported 00:07:35.932 Namespace Management: Supported 00:07:35.932 Device Self-Test: Not Supported 00:07:35.932 Directives: Supported 00:07:35.932 NVMe-MI: Not Supported 00:07:35.932 Virtualization Management: Not Supported 00:07:35.932 Doorbell Buffer Config: Supported 00:07:35.932 Get LBA Status Capability: Not Supported 00:07:35.932 Command & Feature Lockdown Capability: Not Supported 00:07:35.932 Abort Command Limit: 4 00:07:35.932 Async Event Request Limit: 4 00:07:35.932 Number of Firmware Slots: N/A 00:07:35.932 Firmware Slot 1 Read-Only: N/A 00:07:35.932 Firmware Activation Without Reset: N/A 00:07:35.932 Multiple Update Detection Support: N/A 00:07:35.932 Firmware Update Granularity: No Information Provided 00:07:35.932 Per-Namespace SMART Log: Yes 00:07:35.932 Asymmetric Namespace Access Log Page: Not Supported 00:07:35.932 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:35.932 Command Effects Log Page: Supported 00:07:35.932 Get Log Page Extended Data: Supported 00:07:35.932 Telemetry Log Pages: Not Supported 00:07:35.932 Persistent Event Log Pages: Not Supported 00:07:35.932 Supported Log Pages Log Page: May Support 00:07:35.932 Commands Supported & Effects Log Page: Not Supported 00:07:35.932 Feature Identifiers & Effects Log Page:May Support 00:07:35.932 NVMe-MI Commands & Effects Log Page: May Support 00:07:35.932 Data Area 4 for Telemetry Log: Not Supported 00:07:35.932 Error Log Page Entries Supported: 1 00:07:35.932 Keep Alive: Not Supported 00:07:35.932 00:07:35.932 NVM Command Set Attributes 00:07:35.932 ========================== 00:07:35.932 Submission Queue Entry Size 00:07:35.932 Max: 64 00:07:35.932 Min: 64 00:07:35.932 Completion Queue Entry Size 00:07:35.932 Max: 16 00:07:35.932 Min: 16 00:07:35.932 Number of Namespaces: 256 00:07:35.932 Compare Command: Supported 00:07:35.932 Write Uncorrectable Command: Not Supported 00:07:35.932 Dataset Management Command: Supported 00:07:35.932 Write Zeroes Command: Supported 00:07:35.932 Set Features Save Field: Supported 00:07:35.932 Reservations: Not Supported 00:07:35.932 Timestamp: Supported 00:07:35.932 Copy: Supported 00:07:35.932 Volatile Write Cache: Present 00:07:35.932 Atomic Write Unit (Normal): 1 00:07:35.932 Atomic Write Unit (PFail): 1 00:07:35.932 Atomic Compare & Write Unit: 1 00:07:35.932 Fused Compare & Write: Not Supported 00:07:35.932 Scatter-Gather List 00:07:35.932 SGL Command Set: Supported 00:07:35.932 SGL Keyed: Not Supported 00:07:35.932 SGL Bit Bucket Descriptor: Not Supported 00:07:35.932 SGL Metadata Pointer: Not Supported 00:07:35.932 Oversized SGL: Not Supported 00:07:35.932 SGL Metadata Address: Not Supported 00:07:35.932 SGL Offset: Not Supported 00:07:35.933 Transport SGL Data Block: Not Supported 00:07:35.933 Replay Protected Memory Block: Not Supported 00:07:35.933 00:07:35.933 Firmware Slot Information 00:07:35.933 ========================= 00:07:35.933 Active slot: 1 00:07:35.933 Slot 1 Firmware Revision: 1.0 00:07:35.933 00:07:35.933 00:07:35.933 Commands Supported and Effects 00:07:35.933 ============================== 00:07:35.933 Admin Commands 00:07:35.933 -------------- 00:07:35.933 Delete I/O Submission Queue (00h): Supported 00:07:35.933 Create I/O Submission Queue (01h): Supported 00:07:35.933 Get Log Page (02h): Supported 00:07:35.933 Delete I/O Completion Queue (04h): Supported 00:07:35.933 Create I/O Completion Queue (05h): Supported 00:07:35.933 Identify (06h): Supported 00:07:35.933 Abort (08h): Supported 00:07:35.933 Set Features (09h): Supported 00:07:35.933 Get Features (0Ah): Supported 00:07:35.933 Asynchronous Event Request (0Ch): Supported 00:07:35.933 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:35.933 Directive Send (19h): Supported 00:07:35.933 Directive Receive (1Ah): Supported 00:07:35.933 Virtualization Management (1Ch): Supported 00:07:35.933 Doorbell Buffer Config (7Ch): Supported 00:07:35.933 Format NVM (80h): Supported LBA-Change 00:07:35.933 I/O Commands 00:07:35.933 ------------ 00:07:35.933 Flush (00h): Supported LBA-Change 00:07:35.933 Write (01h): Supported LBA-Change 00:07:35.933 Read (02h): Supported 00:07:35.933 Compare (05h): Supported 00:07:35.933 Write Zeroes (08h): Supported LBA-Change 00:07:35.933 Dataset Management (09h): Supported LBA-Change 00:07:35.933 Unknown (0Ch): Supported 00:07:35.933 Unknown (12h): Supported 00:07:35.933 Copy (19h): Supported 
LBA-Change 00:07:35.933 Unknown (1Dh): Supported LBA-Change 00:07:35.933 00:07:35.933 Error Log 00:07:35.933 ========= 00:07:35.933 00:07:35.933 Arbitration 00:07:35.933 =========== 00:07:35.933 Arbitration Burst: no limit 00:07:35.933 00:07:35.933 Power Management 00:07:35.933 ================ 00:07:35.933 Number of Power States: 1 00:07:35.933 Current Power State: Power State #0 00:07:35.933 Power State #0: 00:07:35.933 Max Power: 25.00 W 00:07:35.933 Non-Operational State: Operational 00:07:35.933 Entry Latency: 16 microseconds 00:07:35.933 Exit Latency: 4 microseconds 00:07:35.933 Relative Read Throughput: 0 00:07:35.933 Relative Read Latency: 0 00:07:35.933 Relative Write Throughput: 0 00:07:35.933 Relative Write Latency: 0 00:07:35.933 Idle Power: Not Reported 00:07:35.933 Active Power: Not Reported 00:07:35.933 Non-Operational Permissive Mode: Not Supported 00:07:35.933 00:07:35.933 Health Information 00:07:35.933 ================== 00:07:35.933 Critical Warnings: 00:07:35.933 Available Spare Space: OK 00:07:35.933 Temperature: OK 00:07:35.933 Device Reliability: OK 00:07:35.933 Read Only: No 00:07:35.933 Volatile Memory Backup: OK 00:07:35.933 Current Temperature: 323 Kelvin (50 Celsius) 00:07:35.933 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:35.933 Available Spare: 0% 00:07:35.933 Available Spare Threshold: 0% 00:07:35.933 Life Percentage Used: 0% 00:07:35.933 Data Units Read: 1993 00:07:35.933 Data Units Written: 1780 00:07:35.933 Host Read Commands: 93331 00:07:35.933 Host Write Commands: 91600 00:07:35.933 Controller Busy Time: 0 minutes 00:07:35.933 Power Cycles: 0 00:07:35.933 Power On Hours: 0 hours 00:07:35.933 Unsafe Shutdowns: 0 00:07:35.933 Unrecoverable Media Errors: 0 00:07:35.933 Lifetime Error Log Entries: 0 00:07:35.933 Warning Temperature Time: 0 minutes 00:07:35.933 Critical Temperature Time: 0 minutes 00:07:35.933 00:07:35.933 Number of Queues 00:07:35.933 ================ 00:07:35.933 Number of I/O Submission Queues: 64 00:07:35.933 Number of I/O Completion Queues: 64 00:07:35.933 00:07:35.933 ZNS Specific Controller Data 00:07:35.933 ============================ 00:07:35.933 Zone Append Size Limit: 0 00:07:35.933 00:07:35.933 00:07:35.933 Active Namespaces 00:07:35.933 ================= 00:07:35.933 Namespace ID:1 00:07:35.933 Error Recovery Timeout: Unlimited 00:07:35.933 Command Set Identifier: NVM (00h) 00:07:35.933 Deallocate: Supported 00:07:35.933 Deallocated/Unwritten Error: Supported 00:07:35.933 Deallocated Read Value: All 0x00 00:07:35.933 Deallocate in Write Zeroes: Not Supported 00:07:35.933 Deallocated Guard Field: 0xFFFF 00:07:35.933 Flush: Supported 00:07:35.933 Reservation: Not Supported 00:07:35.933 Namespace Sharing Capabilities: Private 00:07:35.933 Size (in LBAs): 1048576 (4GiB) 00:07:35.933 Capacity (in LBAs): 1048576 (4GiB) 00:07:35.933 Utilization (in LBAs): 1048576 (4GiB) 00:07:35.933 Thin Provisioning: Not Supported 00:07:35.933 Per-NS Atomic Units: No 00:07:35.933 Maximum Single Source Range Length: 128 00:07:35.933 Maximum Copy Length: 128 00:07:35.933 Maximum Source Range Count: 128 00:07:35.933 NGUID/EUI64 Never Reused: No 00:07:35.933 Namespace Write Protected: No 00:07:35.933 Number of LBA Formats: 8 00:07:35.933 Current LBA Format: LBA Format #04 00:07:35.933 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.933 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.933 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.933 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.933 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:35.933 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.933 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.933 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.933 00:07:35.933 NVM Specific Namespace Data 00:07:35.933 =========================== 00:07:35.933 Logical Block Storage Tag Mask: 0 00:07:35.933 Protection Information Capabilities: 00:07:35.933 16b Guard Protection Information Storage Tag Support: No 00:07:35.933 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.933 Storage Tag Check Read Support: No 00:07:35.933 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.933 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.933 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.933 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.933 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.933 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.933 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.933 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.933 Namespace ID:2 00:07:35.933 Error Recovery Timeout: Unlimited 00:07:35.933 Command Set Identifier: NVM (00h) 00:07:35.933 Deallocate: Supported 00:07:35.933 Deallocated/Unwritten Error: Supported 00:07:35.933 Deallocated Read Value: All 0x00 00:07:35.933 Deallocate in Write Zeroes: Not Supported 00:07:35.933 Deallocated Guard Field: 0xFFFF 00:07:35.933 Flush: Supported 00:07:35.933 Reservation: Not Supported 00:07:35.933 Namespace Sharing Capabilities: Private 00:07:35.933 Size (in LBAs): 1048576 (4GiB) 00:07:35.933 Capacity (in LBAs): 1048576 (4GiB) 00:07:35.933 Utilization (in LBAs): 1048576 (4GiB) 00:07:35.933 Thin Provisioning: Not Supported 00:07:35.933 Per-NS Atomic Units: No 00:07:35.933 Maximum Single Source Range Length: 128 00:07:35.933 Maximum Copy Length: 128 00:07:35.933 Maximum Source Range Count: 128 00:07:35.933 NGUID/EUI64 Never Reused: No 00:07:35.933 Namespace Write Protected: No 00:07:35.933 Number of LBA Formats: 8 00:07:35.933 Current LBA Format: LBA Format #04 00:07:35.934 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.934 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.934 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.934 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.934 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.934 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.934 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.934 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.934 00:07:35.934 NVM Specific Namespace Data 00:07:35.934 =========================== 00:07:35.934 Logical Block Storage Tag Mask: 0 00:07:35.934 Protection Information Capabilities: 00:07:35.934 16b Guard Protection Information Storage Tag Support: No 00:07:35.934 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.934 Storage Tag Check Read Support: No 00:07:35.934 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:35.934 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Namespace ID:3 00:07:35.934 Error Recovery Timeout: Unlimited 00:07:35.934 Command Set Identifier: NVM (00h) 00:07:35.934 Deallocate: Supported 00:07:35.934 Deallocated/Unwritten Error: Supported 00:07:35.934 Deallocated Read Value: All 0x00 00:07:35.934 Deallocate in Write Zeroes: Not Supported 00:07:35.934 Deallocated Guard Field: 0xFFFF 00:07:35.934 Flush: Supported 00:07:35.934 Reservation: Not Supported 00:07:35.934 Namespace Sharing Capabilities: Private 00:07:35.934 Size (in LBAs): 1048576 (4GiB) 00:07:35.934 Capacity (in LBAs): 1048576 (4GiB) 00:07:35.934 Utilization (in LBAs): 1048576 (4GiB) 00:07:35.934 Thin Provisioning: Not Supported 00:07:35.934 Per-NS Atomic Units: No 00:07:35.934 Maximum Single Source Range Length: 128 00:07:35.934 Maximum Copy Length: 128 00:07:35.934 Maximum Source Range Count: 128 00:07:35.934 NGUID/EUI64 Never Reused: No 00:07:35.934 Namespace Write Protected: No 00:07:35.934 Number of LBA Formats: 8 00:07:35.934 Current LBA Format: LBA Format #04 00:07:35.934 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:35.934 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:35.934 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:35.934 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:35.934 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:35.934 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:35.934 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:35.934 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:35.934 00:07:35.934 NVM Specific Namespace Data 00:07:35.934 =========================== 00:07:35.934 Logical Block Storage Tag Mask: 0 00:07:35.934 Protection Information Capabilities: 00:07:35.934 16b Guard Protection Information Storage Tag Support: No 00:07:35.934 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:35.934 Storage Tag Check Read Support: No 00:07:35.934 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:35.934 12:05:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:35.934 12:05:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:36.193 ===================================================== 00:07:36.193 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:36.193 ===================================================== 00:07:36.193 Controller Capabilities/Features 00:07:36.193 ================================ 00:07:36.193 Vendor ID: 1b36 00:07:36.193 Subsystem Vendor ID: 1af4 00:07:36.193 Serial Number: 12340 00:07:36.193 Model Number: QEMU NVMe Ctrl 00:07:36.193 Firmware Version: 8.0.0 00:07:36.193 Recommended Arb Burst: 6 00:07:36.193 IEEE OUI Identifier: 00 54 52 00:07:36.193 Multi-path I/O 00:07:36.193 May have multiple subsystem ports: No 00:07:36.193 May have multiple controllers: No 00:07:36.193 Associated with SR-IOV VF: No 00:07:36.193 Max Data Transfer Size: 524288 00:07:36.193 Max Number of Namespaces: 256 00:07:36.193 Max Number of I/O Queues: 64 00:07:36.193 NVMe Specification Version (VS): 1.4 00:07:36.193 NVMe Specification Version (Identify): 1.4 00:07:36.193 Maximum Queue Entries: 2048 00:07:36.193 Contiguous Queues Required: Yes 00:07:36.193 Arbitration Mechanisms Supported 00:07:36.193 Weighted Round Robin: Not Supported 00:07:36.193 Vendor Specific: Not Supported 00:07:36.193 Reset Timeout: 7500 ms 00:07:36.193 Doorbell Stride: 4 bytes 00:07:36.193 NVM Subsystem Reset: Not Supported 00:07:36.193 Command Sets Supported 00:07:36.193 NVM Command Set: Supported 00:07:36.193 Boot Partition: Not Supported 00:07:36.193 Memory Page Size Minimum: 4096 bytes 00:07:36.193 Memory Page Size Maximum: 65536 bytes 00:07:36.193 Persistent Memory Region: Not Supported 00:07:36.193 Optional Asynchronous Events Supported 00:07:36.193 Namespace Attribute Notices: Supported 00:07:36.193 Firmware Activation Notices: Not Supported 00:07:36.193 ANA Change Notices: Not Supported 00:07:36.193 PLE Aggregate Log Change Notices: Not Supported 00:07:36.193 LBA Status Info Alert Notices: Not Supported 00:07:36.193 EGE Aggregate Log Change Notices: Not Supported 00:07:36.193 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.193 Zone Descriptor Change Notices: Not Supported 00:07:36.193 Discovery Log Change Notices: Not Supported 00:07:36.193 Controller Attributes 00:07:36.193 128-bit Host Identifier: Not Supported 00:07:36.193 Non-Operational Permissive Mode: Not Supported 00:07:36.193 NVM Sets: Not Supported 00:07:36.193 Read Recovery Levels: Not Supported 00:07:36.193 Endurance Groups: Not Supported 00:07:36.193 Predictable Latency Mode: Not Supported 00:07:36.193 Traffic Based Keep ALive: Not Supported 00:07:36.193 Namespace Granularity: Not Supported 00:07:36.193 SQ Associations: Not Supported 00:07:36.193 UUID List: Not Supported 00:07:36.193 Multi-Domain Subsystem: Not Supported 00:07:36.193 Fixed Capacity Management: Not Supported 00:07:36.193 Variable Capacity Management: Not Supported 00:07:36.193 Delete Endurance Group: Not Supported 00:07:36.193 Delete NVM Set: Not Supported 00:07:36.193 Extended LBA Formats Supported: Supported 00:07:36.193 Flexible Data Placement Supported: Not Supported 00:07:36.193 00:07:36.193 Controller Memory Buffer Support 00:07:36.193 ================================ 00:07:36.193 Supported: No 00:07:36.193 00:07:36.193 Persistent Memory Region Support 00:07:36.193 ================================ 00:07:36.193 Supported: No 00:07:36.193 00:07:36.193 Admin Command Set Attributes 00:07:36.193 ============================ 00:07:36.193 Security Send/Receive: Not Supported 00:07:36.193 
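This 12340 dump comes from the spdk_nvme_identify invocation at its head, which targets a single PCIe function through the -r 'trtype:PCIe traddr:0000:00:10.0' transport ID. A rough equivalent against SPDK's public API, parsing that transport ID, probing it, and printing a few identify-controller fields, is sketched below; the app name and error handling are ours, spdk_env_init needs the usual hugepage setup, and this is an illustration rather than the tool's actual source.

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to whatever answers at this address */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        /* The same identify-controller data the dumps above print. */
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        printf("%s: SN %.20s MN %.40s FR %.8s\n", trid->traddr,
               (const char *)cdata->sn, (const char *)cdata->mn,
               (const char *)cdata->fr);
        g_ctrlr = ctrlr;
    }

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* illustrative name, not the tool's */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }
        memset(&trid, 0, sizeof(trid));
        /* Same transport ID string the harness passes via -r */
        if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:10.0") != 0) {
            return 1;
        }
        if (spdk_nvme_probe(&trid, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        if (g_ctrlr != NULL) {
            spdk_nvme_detach(g_ctrlr);
        }
        return 0;
    }

Detaching after spdk_nvme_probe() returns, rather than inside the callback, follows the pattern SPDK's own examples use.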
Format NVM: Supported 00:07:36.193 Firmware Activate/Download: Not Supported 00:07:36.193 Namespace Management: Supported 00:07:36.193 Device Self-Test: Not Supported 00:07:36.193 Directives: Supported 00:07:36.193 NVMe-MI: Not Supported 00:07:36.193 Virtualization Management: Not Supported 00:07:36.193 Doorbell Buffer Config: Supported 00:07:36.193 Get LBA Status Capability: Not Supported 00:07:36.193 Command & Feature Lockdown Capability: Not Supported 00:07:36.193 Abort Command Limit: 4 00:07:36.193 Async Event Request Limit: 4 00:07:36.193 Number of Firmware Slots: N/A 00:07:36.193 Firmware Slot 1 Read-Only: N/A 00:07:36.193 Firmware Activation Without Reset: N/A 00:07:36.193 Multiple Update Detection Support: N/A 00:07:36.193 Firmware Update Granularity: No Information Provided 00:07:36.193 Per-Namespace SMART Log: Yes 00:07:36.193 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.193 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:36.193 Command Effects Log Page: Supported 00:07:36.193 Get Log Page Extended Data: Supported 00:07:36.193 Telemetry Log Pages: Not Supported 00:07:36.193 Persistent Event Log Pages: Not Supported 00:07:36.193 Supported Log Pages Log Page: May Support 00:07:36.193 Commands Supported & Effects Log Page: Not Supported 00:07:36.193 Feature Identifiers & Effects Log Page:May Support 00:07:36.193 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.193 Data Area 4 for Telemetry Log: Not Supported 00:07:36.193 Error Log Page Entries Supported: 1 00:07:36.193 Keep Alive: Not Supported 00:07:36.193 00:07:36.193 NVM Command Set Attributes 00:07:36.193 ========================== 00:07:36.193 Submission Queue Entry Size 00:07:36.193 Max: 64 00:07:36.193 Min: 64 00:07:36.193 Completion Queue Entry Size 00:07:36.193 Max: 16 00:07:36.193 Min: 16 00:07:36.193 Number of Namespaces: 256 00:07:36.193 Compare Command: Supported 00:07:36.193 Write Uncorrectable Command: Not Supported 00:07:36.193 Dataset Management Command: Supported 00:07:36.193 Write Zeroes Command: Supported 00:07:36.193 Set Features Save Field: Supported 00:07:36.193 Reservations: Not Supported 00:07:36.193 Timestamp: Supported 00:07:36.193 Copy: Supported 00:07:36.193 Volatile Write Cache: Present 00:07:36.193 Atomic Write Unit (Normal): 1 00:07:36.193 Atomic Write Unit (PFail): 1 00:07:36.193 Atomic Compare & Write Unit: 1 00:07:36.193 Fused Compare & Write: Not Supported 00:07:36.193 Scatter-Gather List 00:07:36.193 SGL Command Set: Supported 00:07:36.193 SGL Keyed: Not Supported 00:07:36.193 SGL Bit Bucket Descriptor: Not Supported 00:07:36.193 SGL Metadata Pointer: Not Supported 00:07:36.193 Oversized SGL: Not Supported 00:07:36.193 SGL Metadata Address: Not Supported 00:07:36.193 SGL Offset: Not Supported 00:07:36.193 Transport SGL Data Block: Not Supported 00:07:36.193 Replay Protected Memory Block: Not Supported 00:07:36.193 00:07:36.193 Firmware Slot Information 00:07:36.193 ========================= 00:07:36.193 Active slot: 1 00:07:36.193 Slot 1 Firmware Revision: 1.0 00:07:36.193 00:07:36.193 00:07:36.193 Commands Supported and Effects 00:07:36.193 ============================== 00:07:36.193 Admin Commands 00:07:36.193 -------------- 00:07:36.194 Delete I/O Submission Queue (00h): Supported 00:07:36.194 Create I/O Submission Queue (01h): Supported 00:07:36.194 Get Log Page (02h): Supported 00:07:36.194 Delete I/O Completion Queue (04h): Supported 00:07:36.194 Create I/O Completion Queue (05h): Supported 00:07:36.194 Identify (06h): Supported 00:07:36.194 Abort (08h): Supported 
00:07:36.194 Set Features (09h): Supported 00:07:36.194 Get Features (0Ah): Supported 00:07:36.194 Asynchronous Event Request (0Ch): Supported 00:07:36.194 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.194 Directive Send (19h): Supported 00:07:36.194 Directive Receive (1Ah): Supported 00:07:36.194 Virtualization Management (1Ch): Supported 00:07:36.194 Doorbell Buffer Config (7Ch): Supported 00:07:36.194 Format NVM (80h): Supported LBA-Change 00:07:36.194 I/O Commands 00:07:36.194 ------------ 00:07:36.194 Flush (00h): Supported LBA-Change 00:07:36.194 Write (01h): Supported LBA-Change 00:07:36.194 Read (02h): Supported 00:07:36.194 Compare (05h): Supported 00:07:36.194 Write Zeroes (08h): Supported LBA-Change 00:07:36.194 Dataset Management (09h): Supported LBA-Change 00:07:36.194 Unknown (0Ch): Supported 00:07:36.194 Unknown (12h): Supported 00:07:36.194 Copy (19h): Supported LBA-Change 00:07:36.194 Unknown (1Dh): Supported LBA-Change 00:07:36.194 00:07:36.194 Error Log 00:07:36.194 ========= 00:07:36.194 00:07:36.194 Arbitration 00:07:36.194 =========== 00:07:36.194 Arbitration Burst: no limit 00:07:36.194 00:07:36.194 Power Management 00:07:36.194 ================ 00:07:36.194 Number of Power States: 1 00:07:36.194 Current Power State: Power State #0 00:07:36.194 Power State #0: 00:07:36.194 Max Power: 25.00 W 00:07:36.194 Non-Operational State: Operational 00:07:36.194 Entry Latency: 16 microseconds 00:07:36.194 Exit Latency: 4 microseconds 00:07:36.194 Relative Read Throughput: 0 00:07:36.194 Relative Read Latency: 0 00:07:36.194 Relative Write Throughput: 0 00:07:36.194 Relative Write Latency: 0 00:07:36.194 Idle Power: Not Reported 00:07:36.194 Active Power: Not Reported 00:07:36.194 Non-Operational Permissive Mode: Not Supported 00:07:36.194 00:07:36.194 Health Information 00:07:36.194 ================== 00:07:36.194 Critical Warnings: 00:07:36.194 Available Spare Space: OK 00:07:36.194 Temperature: OK 00:07:36.194 Device Reliability: OK 00:07:36.194 Read Only: No 00:07:36.194 Volatile Memory Backup: OK 00:07:36.194 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.194 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.194 Available Spare: 0% 00:07:36.194 Available Spare Threshold: 0% 00:07:36.194 Life Percentage Used: 0% 00:07:36.194 Data Units Read: 635 00:07:36.194 Data Units Written: 563 00:07:36.194 Host Read Commands: 30536 00:07:36.194 Host Write Commands: 30322 00:07:36.194 Controller Busy Time: 0 minutes 00:07:36.194 Power Cycles: 0 00:07:36.194 Power On Hours: 0 hours 00:07:36.194 Unsafe Shutdowns: 0 00:07:36.194 Unrecoverable Media Errors: 0 00:07:36.194 Lifetime Error Log Entries: 0 00:07:36.194 Warning Temperature Time: 0 minutes 00:07:36.194 Critical Temperature Time: 0 minutes 00:07:36.194 00:07:36.194 Number of Queues 00:07:36.194 ================ 00:07:36.194 Number of I/O Submission Queues: 64 00:07:36.194 Number of I/O Completion Queues: 64 00:07:36.194 00:07:36.194 ZNS Specific Controller Data 00:07:36.194 ============================ 00:07:36.194 Zone Append Size Limit: 0 00:07:36.194 00:07:36.194 00:07:36.194 Active Namespaces 00:07:36.194 ================= 00:07:36.194 Namespace ID:1 00:07:36.194 Error Recovery Timeout: Unlimited 00:07:36.194 Command Set Identifier: NVM (00h) 00:07:36.194 Deallocate: Supported 00:07:36.194 Deallocated/Unwritten Error: Supported 00:07:36.194 Deallocated Read Value: All 0x00 00:07:36.194 Deallocate in Write Zeroes: Not Supported 00:07:36.194 Deallocated Guard Field: 0xFFFF 00:07:36.194 Flush: 
Supported 00:07:36.194 Reservation: Not Supported 00:07:36.194 Metadata Transferred as: Separate Metadata Buffer 00:07:36.194 Namespace Sharing Capabilities: Private 00:07:36.194 Size (in LBAs): 1548666 (5GiB) 00:07:36.194 Capacity (in LBAs): 1548666 (5GiB) 00:07:36.194 Utilization (in LBAs): 1548666 (5GiB) 00:07:36.194 Thin Provisioning: Not Supported 00:07:36.194 Per-NS Atomic Units: No 00:07:36.194 Maximum Single Source Range Length: 128 00:07:36.194 Maximum Copy Length: 128 00:07:36.194 Maximum Source Range Count: 128 00:07:36.194 NGUID/EUI64 Never Reused: No 00:07:36.194 Namespace Write Protected: No 00:07:36.194 Number of LBA Formats: 8 00:07:36.194 Current LBA Format: LBA Format #07 00:07:36.194 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.194 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.194 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.194 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.194 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.194 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.194 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.194 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.194 00:07:36.194 NVM Specific Namespace Data 00:07:36.194 =========================== 00:07:36.194 Logical Block Storage Tag Mask: 0 00:07:36.194 Protection Information Capabilities: 00:07:36.194 16b Guard Protection Information Storage Tag Support: No 00:07:36.194 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.194 Storage Tag Check Read Support: No 00:07:36.194 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.194 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.194 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.194 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.194 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.194 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.194 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.194 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.194 12:05:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.194 12:05:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:36.452 ===================================================== 00:07:36.452 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:36.452 ===================================================== 00:07:36.452 Controller Capabilities/Features 00:07:36.452 ================================ 00:07:36.452 Vendor ID: 1b36 00:07:36.452 Subsystem Vendor ID: 1af4 00:07:36.452 Serial Number: 12341 00:07:36.452 Model Number: QEMU NVMe Ctrl 00:07:36.452 Firmware Version: 8.0.0 00:07:36.452 Recommended Arb Burst: 6 00:07:36.452 IEEE OUI Identifier: 00 54 52 00:07:36.452 Multi-path I/O 00:07:36.452 May have multiple subsystem ports: No 00:07:36.452 May have multiple controllers: No 00:07:36.452 Associated with SR-IOV VF: No 00:07:36.452 Max Data Transfer Size: 524288 00:07:36.452 Max Number of Namespaces: 256 00:07:36.452 Max Number of I/O Queues: 64 00:07:36.452 NVMe 
Specification Version (VS): 1.4 00:07:36.452 NVMe Specification Version (Identify): 1.4 00:07:36.452 Maximum Queue Entries: 2048 00:07:36.452 Contiguous Queues Required: Yes 00:07:36.452 Arbitration Mechanisms Supported 00:07:36.452 Weighted Round Robin: Not Supported 00:07:36.452 Vendor Specific: Not Supported 00:07:36.452 Reset Timeout: 7500 ms 00:07:36.452 Doorbell Stride: 4 bytes 00:07:36.452 NVM Subsystem Reset: Not Supported 00:07:36.452 Command Sets Supported 00:07:36.452 NVM Command Set: Supported 00:07:36.452 Boot Partition: Not Supported 00:07:36.452 Memory Page Size Minimum: 4096 bytes 00:07:36.452 Memory Page Size Maximum: 65536 bytes 00:07:36.452 Persistent Memory Region: Not Supported 00:07:36.452 Optional Asynchronous Events Supported 00:07:36.453 Namespace Attribute Notices: Supported 00:07:36.453 Firmware Activation Notices: Not Supported 00:07:36.453 ANA Change Notices: Not Supported 00:07:36.453 PLE Aggregate Log Change Notices: Not Supported 00:07:36.453 LBA Status Info Alert Notices: Not Supported 00:07:36.453 EGE Aggregate Log Change Notices: Not Supported 00:07:36.453 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.453 Zone Descriptor Change Notices: Not Supported 00:07:36.453 Discovery Log Change Notices: Not Supported 00:07:36.453 Controller Attributes 00:07:36.453 128-bit Host Identifier: Not Supported 00:07:36.453 Non-Operational Permissive Mode: Not Supported 00:07:36.453 NVM Sets: Not Supported 00:07:36.453 Read Recovery Levels: Not Supported 00:07:36.453 Endurance Groups: Not Supported 00:07:36.453 Predictable Latency Mode: Not Supported 00:07:36.453 Traffic Based Keep ALive: Not Supported 00:07:36.453 Namespace Granularity: Not Supported 00:07:36.453 SQ Associations: Not Supported 00:07:36.453 UUID List: Not Supported 00:07:36.453 Multi-Domain Subsystem: Not Supported 00:07:36.453 Fixed Capacity Management: Not Supported 00:07:36.453 Variable Capacity Management: Not Supported 00:07:36.453 Delete Endurance Group: Not Supported 00:07:36.453 Delete NVM Set: Not Supported 00:07:36.453 Extended LBA Formats Supported: Supported 00:07:36.453 Flexible Data Placement Supported: Not Supported 00:07:36.453 00:07:36.453 Controller Memory Buffer Support 00:07:36.453 ================================ 00:07:36.453 Supported: No 00:07:36.453 00:07:36.453 Persistent Memory Region Support 00:07:36.453 ================================ 00:07:36.453 Supported: No 00:07:36.453 00:07:36.453 Admin Command Set Attributes 00:07:36.453 ============================ 00:07:36.453 Security Send/Receive: Not Supported 00:07:36.453 Format NVM: Supported 00:07:36.453 Firmware Activate/Download: Not Supported 00:07:36.453 Namespace Management: Supported 00:07:36.453 Device Self-Test: Not Supported 00:07:36.453 Directives: Supported 00:07:36.453 NVMe-MI: Not Supported 00:07:36.453 Virtualization Management: Not Supported 00:07:36.453 Doorbell Buffer Config: Supported 00:07:36.453 Get LBA Status Capability: Not Supported 00:07:36.453 Command & Feature Lockdown Capability: Not Supported 00:07:36.453 Abort Command Limit: 4 00:07:36.453 Async Event Request Limit: 4 00:07:36.453 Number of Firmware Slots: N/A 00:07:36.453 Firmware Slot 1 Read-Only: N/A 00:07:36.453 Firmware Activation Without Reset: N/A 00:07:36.453 Multiple Update Detection Support: N/A 00:07:36.453 Firmware Update Granularity: No Information Provided 00:07:36.453 Per-Namespace SMART Log: Yes 00:07:36.453 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.453 Subsystem NQN: nqn.2019-08.org.qemu:12341 
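Every dump in this run carries the same Health Information block (printed for 12340 above and repeated for 12341 below). Temperature is given in kelvin because that is how the NVMe SMART / Health Information log page encodes it; the tool derives the Celsius figure by subtracting 273, hence 323 Kelvin (50 Celsius). A hedged, self-contained sketch of reading that page with SPDK, reusing the polling pattern from the FDP example earlier:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void health_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    /* Print the fields behind the "Health Information" blocks above. SMART
     * encodes composite temperature in kelvin, hence 323 K -> 50 C. */
    static void print_health(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_health_information_page hp;
        bool done = false;

        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                             SPDK_NVME_GLOBAL_NS_TAG, &hp, sizeof(hp),
                                             0, health_done, &done) != 0) {
            return;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        printf("Current Temperature: %u Kelvin (%u Celsius)\n",
               (unsigned)hp.temperature, (unsigned)(hp.temperature - 273));
        printf("Available Spare: %u%%\n", (unsigned)hp.available_spare);
        printf("Life Percentage Used: %u%%\n", (unsigned)hp.percentage_used);
    }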
00:07:36.453 Command Effects Log Page: Supported 00:07:36.453 Get Log Page Extended Data: Supported 00:07:36.453 Telemetry Log Pages: Not Supported 00:07:36.453 Persistent Event Log Pages: Not Supported 00:07:36.453 Supported Log Pages Log Page: May Support 00:07:36.453 Commands Supported & Effects Log Page: Not Supported 00:07:36.453 Feature Identifiers & Effects Log Page:May Support 00:07:36.453 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.453 Data Area 4 for Telemetry Log: Not Supported 00:07:36.453 Error Log Page Entries Supported: 1 00:07:36.453 Keep Alive: Not Supported 00:07:36.453 00:07:36.453 NVM Command Set Attributes 00:07:36.453 ========================== 00:07:36.453 Submission Queue Entry Size 00:07:36.453 Max: 64 00:07:36.453 Min: 64 00:07:36.453 Completion Queue Entry Size 00:07:36.453 Max: 16 00:07:36.453 Min: 16 00:07:36.453 Number of Namespaces: 256 00:07:36.453 Compare Command: Supported 00:07:36.453 Write Uncorrectable Command: Not Supported 00:07:36.453 Dataset Management Command: Supported 00:07:36.453 Write Zeroes Command: Supported 00:07:36.453 Set Features Save Field: Supported 00:07:36.453 Reservations: Not Supported 00:07:36.453 Timestamp: Supported 00:07:36.453 Copy: Supported 00:07:36.453 Volatile Write Cache: Present 00:07:36.453 Atomic Write Unit (Normal): 1 00:07:36.453 Atomic Write Unit (PFail): 1 00:07:36.453 Atomic Compare & Write Unit: 1 00:07:36.453 Fused Compare & Write: Not Supported 00:07:36.453 Scatter-Gather List 00:07:36.453 SGL Command Set: Supported 00:07:36.453 SGL Keyed: Not Supported 00:07:36.453 SGL Bit Bucket Descriptor: Not Supported 00:07:36.453 SGL Metadata Pointer: Not Supported 00:07:36.453 Oversized SGL: Not Supported 00:07:36.453 SGL Metadata Address: Not Supported 00:07:36.453 SGL Offset: Not Supported 00:07:36.453 Transport SGL Data Block: Not Supported 00:07:36.453 Replay Protected Memory Block: Not Supported 00:07:36.453 00:07:36.453 Firmware Slot Information 00:07:36.453 ========================= 00:07:36.453 Active slot: 1 00:07:36.453 Slot 1 Firmware Revision: 1.0 00:07:36.453 00:07:36.453 00:07:36.453 Commands Supported and Effects 00:07:36.453 ============================== 00:07:36.453 Admin Commands 00:07:36.453 -------------- 00:07:36.453 Delete I/O Submission Queue (00h): Supported 00:07:36.453 Create I/O Submission Queue (01h): Supported 00:07:36.453 Get Log Page (02h): Supported 00:07:36.453 Delete I/O Completion Queue (04h): Supported 00:07:36.453 Create I/O Completion Queue (05h): Supported 00:07:36.453 Identify (06h): Supported 00:07:36.453 Abort (08h): Supported 00:07:36.453 Set Features (09h): Supported 00:07:36.453 Get Features (0Ah): Supported 00:07:36.453 Asynchronous Event Request (0Ch): Supported 00:07:36.453 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.453 Directive Send (19h): Supported 00:07:36.453 Directive Receive (1Ah): Supported 00:07:36.453 Virtualization Management (1Ch): Supported 00:07:36.453 Doorbell Buffer Config (7Ch): Supported 00:07:36.453 Format NVM (80h): Supported LBA-Change 00:07:36.453 I/O Commands 00:07:36.453 ------------ 00:07:36.453 Flush (00h): Supported LBA-Change 00:07:36.453 Write (01h): Supported LBA-Change 00:07:36.453 Read (02h): Supported 00:07:36.453 Compare (05h): Supported 00:07:36.453 Write Zeroes (08h): Supported LBA-Change 00:07:36.453 Dataset Management (09h): Supported LBA-Change 00:07:36.453 Unknown (0Ch): Supported 00:07:36.453 Unknown (12h): Supported 00:07:36.453 Copy (19h): Supported LBA-Change 00:07:36.453 Unknown (1Dh): 
Supported LBA-Change 00:07:36.453 00:07:36.453 Error Log 00:07:36.453 ========= 00:07:36.453 00:07:36.453 Arbitration 00:07:36.453 =========== 00:07:36.453 Arbitration Burst: no limit 00:07:36.453 00:07:36.453 Power Management 00:07:36.453 ================ 00:07:36.453 Number of Power States: 1 00:07:36.453 Current Power State: Power State #0 00:07:36.453 Power State #0: 00:07:36.453 Max Power: 25.00 W 00:07:36.453 Non-Operational State: Operational 00:07:36.453 Entry Latency: 16 microseconds 00:07:36.453 Exit Latency: 4 microseconds 00:07:36.453 Relative Read Throughput: 0 00:07:36.453 Relative Read Latency: 0 00:07:36.453 Relative Write Throughput: 0 00:07:36.453 Relative Write Latency: 0 00:07:36.453 Idle Power: Not Reported 00:07:36.453 Active Power: Not Reported 00:07:36.453 Non-Operational Permissive Mode: Not Supported 00:07:36.453 00:07:36.453 Health Information 00:07:36.453 ================== 00:07:36.453 Critical Warnings: 00:07:36.453 Available Spare Space: OK 00:07:36.453 Temperature: OK 00:07:36.453 Device Reliability: OK 00:07:36.453 Read Only: No 00:07:36.453 Volatile Memory Backup: OK 00:07:36.453 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.453 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.453 Available Spare: 0% 00:07:36.453 Available Spare Threshold: 0% 00:07:36.453 Life Percentage Used: 0% 00:07:36.453 Data Units Read: 954 00:07:36.453 Data Units Written: 827 00:07:36.453 Host Read Commands: 45372 00:07:36.453 Host Write Commands: 44270 00:07:36.453 Controller Busy Time: 0 minutes 00:07:36.453 Power Cycles: 0 00:07:36.453 Power On Hours: 0 hours 00:07:36.453 Unsafe Shutdowns: 0 00:07:36.453 Unrecoverable Media Errors: 0 00:07:36.453 Lifetime Error Log Entries: 0 00:07:36.453 Warning Temperature Time: 0 minutes 00:07:36.453 Critical Temperature Time: 0 minutes 00:07:36.453 00:07:36.453 Number of Queues 00:07:36.453 ================ 00:07:36.453 Number of I/O Submission Queues: 64 00:07:36.453 Number of I/O Completion Queues: 64 00:07:36.453 00:07:36.453 ZNS Specific Controller Data 00:07:36.453 ============================ 00:07:36.453 Zone Append Size Limit: 0 00:07:36.453 00:07:36.453 00:07:36.453 Active Namespaces 00:07:36.453 ================= 00:07:36.453 Namespace ID:1 00:07:36.453 Error Recovery Timeout: Unlimited 00:07:36.453 Command Set Identifier: NVM (00h) 00:07:36.454 Deallocate: Supported 00:07:36.454 Deallocated/Unwritten Error: Supported 00:07:36.454 Deallocated Read Value: All 0x00 00:07:36.454 Deallocate in Write Zeroes: Not Supported 00:07:36.454 Deallocated Guard Field: 0xFFFF 00:07:36.454 Flush: Supported 00:07:36.454 Reservation: Not Supported 00:07:36.454 Namespace Sharing Capabilities: Private 00:07:36.454 Size (in LBAs): 1310720 (5GiB) 00:07:36.454 Capacity (in LBAs): 1310720 (5GiB) 00:07:36.454 Utilization (in LBAs): 1310720 (5GiB) 00:07:36.454 Thin Provisioning: Not Supported 00:07:36.454 Per-NS Atomic Units: No 00:07:36.454 Maximum Single Source Range Length: 128 00:07:36.454 Maximum Copy Length: 128 00:07:36.454 Maximum Source Range Count: 128 00:07:36.454 NGUID/EUI64 Never Reused: No 00:07:36.454 Namespace Write Protected: No 00:07:36.454 Number of LBA Formats: 8 00:07:36.454 Current LBA Format: LBA Format #04 00:07:36.454 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.454 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.454 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.454 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.454 LBA Format #04: Data Size: 4096 Metadata Size: 0 
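The Active Namespaces entries report Size, Capacity, and Utilization in LBAs of the current LBA format. With LBA Format #04 (4096-byte data blocks, no interleaved metadata), the 1310720-LBA namespace above works out to exactly 1310720 * 4096 = 5368709120 bytes = 5 GiB, matching the printed figure. SPDK exposes the same geometry through per-namespace getters; a small sketch:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Walk the active namespaces and print the geometry behind the
     * "Size (in LBAs)" lines above (e.g. 1310720 * 4096 = 5 GiB). */
    static void print_ns_geometry(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            printf("Namespace ID:%" PRIu32 " LBAs: %" PRIu64 " x %" PRIu32
                   " B = %" PRIu64 " bytes\n",
                   nsid, spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns), spdk_nvme_ns_get_size(ns));
        }
    }

spdk_nvme_ns_get_size() is the sector count times the sector size of the current format, so it reproduces the multiplication above.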
00:07:36.454 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.454 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.454 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.454 00:07:36.454 NVM Specific Namespace Data 00:07:36.454 =========================== 00:07:36.454 Logical Block Storage Tag Mask: 0 00:07:36.454 Protection Information Capabilities: 00:07:36.454 16b Guard Protection Information Storage Tag Support: No 00:07:36.454 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.454 Storage Tag Check Read Support: No 00:07:36.454 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.454 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.454 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.454 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.454 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.454 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.454 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.454 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.454 12:05:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.454 12:05:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:36.713 ===================================================== 00:07:36.713 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:36.713 ===================================================== 00:07:36.713 Controller Capabilities/Features 00:07:36.713 ================================ 00:07:36.713 Vendor ID: 1b36 00:07:36.713 Subsystem Vendor ID: 1af4 00:07:36.713 Serial Number: 12342 00:07:36.713 Model Number: QEMU NVMe Ctrl 00:07:36.713 Firmware Version: 8.0.0 00:07:36.713 Recommended Arb Burst: 6 00:07:36.713 IEEE OUI Identifier: 00 54 52 00:07:36.713 Multi-path I/O 00:07:36.713 May have multiple subsystem ports: No 00:07:36.713 May have multiple controllers: No 00:07:36.713 Associated with SR-IOV VF: No 00:07:36.713 Max Data Transfer Size: 524288 00:07:36.713 Max Number of Namespaces: 256 00:07:36.713 Max Number of I/O Queues: 64 00:07:36.713 NVMe Specification Version (VS): 1.4 00:07:36.713 NVMe Specification Version (Identify): 1.4 00:07:36.713 Maximum Queue Entries: 2048 00:07:36.713 Contiguous Queues Required: Yes 00:07:36.713 Arbitration Mechanisms Supported 00:07:36.713 Weighted Round Robin: Not Supported 00:07:36.713 Vendor Specific: Not Supported 00:07:36.713 Reset Timeout: 7500 ms 00:07:36.713 Doorbell Stride: 4 bytes 00:07:36.713 NVM Subsystem Reset: Not Supported 00:07:36.713 Command Sets Supported 00:07:36.713 NVM Command Set: Supported 00:07:36.713 Boot Partition: Not Supported 00:07:36.713 Memory Page Size Minimum: 4096 bytes 00:07:36.713 Memory Page Size Maximum: 65536 bytes 00:07:36.713 Persistent Memory Region: Not Supported 00:07:36.713 Optional Asynchronous Events Supported 00:07:36.713 Namespace Attribute Notices: Supported 00:07:36.713 Firmware Activation Notices: Not Supported 00:07:36.713 ANA Change Notices: Not Supported 00:07:36.713 PLE Aggregate Log Change Notices: Not Supported 00:07:36.713 LBA Status Info Alert Notices: 
Not Supported 00:07:36.713 EGE Aggregate Log Change Notices: Not Supported 00:07:36.713 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.713 Zone Descriptor Change Notices: Not Supported 00:07:36.713 Discovery Log Change Notices: Not Supported 00:07:36.713 Controller Attributes 00:07:36.713 128-bit Host Identifier: Not Supported 00:07:36.713 Non-Operational Permissive Mode: Not Supported 00:07:36.713 NVM Sets: Not Supported 00:07:36.713 Read Recovery Levels: Not Supported 00:07:36.713 Endurance Groups: Not Supported 00:07:36.713 Predictable Latency Mode: Not Supported 00:07:36.713 Traffic Based Keep ALive: Not Supported 00:07:36.713 Namespace Granularity: Not Supported 00:07:36.713 SQ Associations: Not Supported 00:07:36.713 UUID List: Not Supported 00:07:36.713 Multi-Domain Subsystem: Not Supported 00:07:36.713 Fixed Capacity Management: Not Supported 00:07:36.713 Variable Capacity Management: Not Supported 00:07:36.713 Delete Endurance Group: Not Supported 00:07:36.713 Delete NVM Set: Not Supported 00:07:36.713 Extended LBA Formats Supported: Supported 00:07:36.713 Flexible Data Placement Supported: Not Supported 00:07:36.713 00:07:36.713 Controller Memory Buffer Support 00:07:36.713 ================================ 00:07:36.713 Supported: No 00:07:36.713 00:07:36.713 Persistent Memory Region Support 00:07:36.713 ================================ 00:07:36.713 Supported: No 00:07:36.713 00:07:36.713 Admin Command Set Attributes 00:07:36.713 ============================ 00:07:36.713 Security Send/Receive: Not Supported 00:07:36.713 Format NVM: Supported 00:07:36.713 Firmware Activate/Download: Not Supported 00:07:36.713 Namespace Management: Supported 00:07:36.713 Device Self-Test: Not Supported 00:07:36.713 Directives: Supported 00:07:36.713 NVMe-MI: Not Supported 00:07:36.713 Virtualization Management: Not Supported 00:07:36.713 Doorbell Buffer Config: Supported 00:07:36.713 Get LBA Status Capability: Not Supported 00:07:36.713 Command & Feature Lockdown Capability: Not Supported 00:07:36.713 Abort Command Limit: 4 00:07:36.713 Async Event Request Limit: 4 00:07:36.713 Number of Firmware Slots: N/A 00:07:36.713 Firmware Slot 1 Read-Only: N/A 00:07:36.713 Firmware Activation Without Reset: N/A 00:07:36.713 Multiple Update Detection Support: N/A 00:07:36.713 Firmware Update Granularity: No Information Provided 00:07:36.713 Per-Namespace SMART Log: Yes 00:07:36.713 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.713 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:36.713 Command Effects Log Page: Supported 00:07:36.713 Get Log Page Extended Data: Supported 00:07:36.713 Telemetry Log Pages: Not Supported 00:07:36.713 Persistent Event Log Pages: Not Supported 00:07:36.713 Supported Log Pages Log Page: May Support 00:07:36.713 Commands Supported & Effects Log Page: Not Supported 00:07:36.713 Feature Identifiers & Effects Log Page:May Support 00:07:36.713 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.713 Data Area 4 for Telemetry Log: Not Supported 00:07:36.713 Error Log Page Entries Supported: 1 00:07:36.713 Keep Alive: Not Supported 00:07:36.713 00:07:36.713 NVM Command Set Attributes 00:07:36.713 ========================== 00:07:36.713 Submission Queue Entry Size 00:07:36.713 Max: 64 00:07:36.713 Min: 64 00:07:36.713 Completion Queue Entry Size 00:07:36.713 Max: 16 00:07:36.713 Min: 16 00:07:36.713 Number of Namespaces: 256 00:07:36.713 Compare Command: Supported 00:07:36.713 Write Uncorrectable Command: Not Supported 00:07:36.713 Dataset Management Command: 
Supported 00:07:36.713 Write Zeroes Command: Supported 00:07:36.713 Set Features Save Field: Supported 00:07:36.713 Reservations: Not Supported 00:07:36.713 Timestamp: Supported 00:07:36.713 Copy: Supported 00:07:36.713 Volatile Write Cache: Present 00:07:36.713 Atomic Write Unit (Normal): 1 00:07:36.713 Atomic Write Unit (PFail): 1 00:07:36.713 Atomic Compare & Write Unit: 1 00:07:36.713 Fused Compare & Write: Not Supported 00:07:36.713 Scatter-Gather List 00:07:36.713 SGL Command Set: Supported 00:07:36.713 SGL Keyed: Not Supported 00:07:36.713 SGL Bit Bucket Descriptor: Not Supported 00:07:36.713 SGL Metadata Pointer: Not Supported 00:07:36.713 Oversized SGL: Not Supported 00:07:36.713 SGL Metadata Address: Not Supported 00:07:36.713 SGL Offset: Not Supported 00:07:36.713 Transport SGL Data Block: Not Supported 00:07:36.713 Replay Protected Memory Block: Not Supported 00:07:36.713 00:07:36.713 Firmware Slot Information 00:07:36.713 ========================= 00:07:36.713 Active slot: 1 00:07:36.713 Slot 1 Firmware Revision: 1.0 00:07:36.713 00:07:36.713 00:07:36.713 Commands Supported and Effects 00:07:36.713 ============================== 00:07:36.713 Admin Commands 00:07:36.713 -------------- 00:07:36.713 Delete I/O Submission Queue (00h): Supported 00:07:36.713 Create I/O Submission Queue (01h): Supported 00:07:36.713 Get Log Page (02h): Supported 00:07:36.713 Delete I/O Completion Queue (04h): Supported 00:07:36.713 Create I/O Completion Queue (05h): Supported 00:07:36.713 Identify (06h): Supported 00:07:36.713 Abort (08h): Supported 00:07:36.713 Set Features (09h): Supported 00:07:36.713 Get Features (0Ah): Supported 00:07:36.713 Asynchronous Event Request (0Ch): Supported 00:07:36.713 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.713 Directive Send (19h): Supported 00:07:36.713 Directive Receive (1Ah): Supported 00:07:36.713 Virtualization Management (1Ch): Supported 00:07:36.713 Doorbell Buffer Config (7Ch): Supported 00:07:36.713 Format NVM (80h): Supported LBA-Change 00:07:36.713 I/O Commands 00:07:36.713 ------------ 00:07:36.713 Flush (00h): Supported LBA-Change 00:07:36.713 Write (01h): Supported LBA-Change 00:07:36.713 Read (02h): Supported 00:07:36.713 Compare (05h): Supported 00:07:36.713 Write Zeroes (08h): Supported LBA-Change 00:07:36.713 Dataset Management (09h): Supported LBA-Change 00:07:36.713 Unknown (0Ch): Supported 00:07:36.713 Unknown (12h): Supported 00:07:36.713 Copy (19h): Supported LBA-Change 00:07:36.714 Unknown (1Dh): Supported LBA-Change 00:07:36.714 00:07:36.714 Error Log 00:07:36.714 ========= 00:07:36.714 00:07:36.714 Arbitration 00:07:36.714 =========== 00:07:36.714 Arbitration Burst: no limit 00:07:36.714 00:07:36.714 Power Management 00:07:36.714 ================ 00:07:36.714 Number of Power States: 1 00:07:36.714 Current Power State: Power State #0 00:07:36.714 Power State #0: 00:07:36.714 Max Power: 25.00 W 00:07:36.714 Non-Operational State: Operational 00:07:36.714 Entry Latency: 16 microseconds 00:07:36.714 Exit Latency: 4 microseconds 00:07:36.714 Relative Read Throughput: 0 00:07:36.714 Relative Read Latency: 0 00:07:36.714 Relative Write Throughput: 0 00:07:36.714 Relative Write Latency: 0 00:07:36.714 Idle Power: Not Reported 00:07:36.714 Active Power: Not Reported 00:07:36.714 Non-Operational Permissive Mode: Not Supported 00:07:36.714 00:07:36.714 Health Information 00:07:36.714 ================== 00:07:36.714 Critical Warnings: 00:07:36.714 Available Spare Space: OK 00:07:36.714 Temperature: OK 00:07:36.714 Device 
Reliability: OK 00:07:36.714 Read Only: No 00:07:36.714 Volatile Memory Backup: OK 00:07:36.714 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.714 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.714 Available Spare: 0% 00:07:36.714 Available Spare Threshold: 0% 00:07:36.714 Life Percentage Used: 0% 00:07:36.714 Data Units Read: 1993 00:07:36.714 Data Units Written: 1780 00:07:36.714 Host Read Commands: 93331 00:07:36.714 Host Write Commands: 91600 00:07:36.714 Controller Busy Time: 0 minutes 00:07:36.714 Power Cycles: 0 00:07:36.714 Power On Hours: 0 hours 00:07:36.714 Unsafe Shutdowns: 0 00:07:36.714 Unrecoverable Media Errors: 0 00:07:36.714 Lifetime Error Log Entries: 0 00:07:36.714 Warning Temperature Time: 0 minutes 00:07:36.714 Critical Temperature Time: 0 minutes 00:07:36.714 00:07:36.714 Number of Queues 00:07:36.714 ================ 00:07:36.714 Number of I/O Submission Queues: 64 00:07:36.714 Number of I/O Completion Queues: 64 00:07:36.714 00:07:36.714 ZNS Specific Controller Data 00:07:36.714 ============================ 00:07:36.714 Zone Append Size Limit: 0 00:07:36.714 00:07:36.714 00:07:36.714 Active Namespaces 00:07:36.714 ================= 00:07:36.714 Namespace ID:1 00:07:36.714 Error Recovery Timeout: Unlimited 00:07:36.714 Command Set Identifier: NVM (00h) 00:07:36.714 Deallocate: Supported 00:07:36.714 Deallocated/Unwritten Error: Supported 00:07:36.714 Deallocated Read Value: All 0x00 00:07:36.714 Deallocate in Write Zeroes: Not Supported 00:07:36.714 Deallocated Guard Field: 0xFFFF 00:07:36.714 Flush: Supported 00:07:36.714 Reservation: Not Supported 00:07:36.714 Namespace Sharing Capabilities: Private 00:07:36.714 Size (in LBAs): 1048576 (4GiB) 00:07:36.714 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.714 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.714 Thin Provisioning: Not Supported 00:07:36.714 Per-NS Atomic Units: No 00:07:36.714 Maximum Single Source Range Length: 128 00:07:36.714 Maximum Copy Length: 128 00:07:36.714 Maximum Source Range Count: 128 00:07:36.714 NGUID/EUI64 Never Reused: No 00:07:36.714 Namespace Write Protected: No 00:07:36.714 Number of LBA Formats: 8 00:07:36.714 Current LBA Format: LBA Format #04 00:07:36.714 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.714 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.714 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.714 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.714 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.714 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.714 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.714 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.714 00:07:36.714 NVM Specific Namespace Data 00:07:36.714 =========================== 00:07:36.714 Logical Block Storage Tag Mask: 0 00:07:36.714 Protection Information Capabilities: 00:07:36.714 16b Guard Protection Information Storage Tag Support: No 00:07:36.714 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.714 Storage Tag Check Read Support: No 00:07:36.714 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Namespace ID:2 00:07:36.714 Error Recovery Timeout: Unlimited 00:07:36.714 Command Set Identifier: NVM (00h) 00:07:36.714 Deallocate: Supported 00:07:36.714 Deallocated/Unwritten Error: Supported 00:07:36.714 Deallocated Read Value: All 0x00 00:07:36.714 Deallocate in Write Zeroes: Not Supported 00:07:36.714 Deallocated Guard Field: 0xFFFF 00:07:36.714 Flush: Supported 00:07:36.714 Reservation: Not Supported 00:07:36.714 Namespace Sharing Capabilities: Private 00:07:36.714 Size (in LBAs): 1048576 (4GiB) 00:07:36.714 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.714 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.714 Thin Provisioning: Not Supported 00:07:36.714 Per-NS Atomic Units: No 00:07:36.714 Maximum Single Source Range Length: 128 00:07:36.714 Maximum Copy Length: 128 00:07:36.714 Maximum Source Range Count: 128 00:07:36.714 NGUID/EUI64 Never Reused: No 00:07:36.714 Namespace Write Protected: No 00:07:36.714 Number of LBA Formats: 8 00:07:36.714 Current LBA Format: LBA Format #04 00:07:36.714 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.714 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.714 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.714 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.714 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.714 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.714 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.714 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.714 00:07:36.714 NVM Specific Namespace Data 00:07:36.714 =========================== 00:07:36.714 Logical Block Storage Tag Mask: 0 00:07:36.714 Protection Information Capabilities: 00:07:36.714 16b Guard Protection Information Storage Tag Support: No 00:07:36.714 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.714 Storage Tag Check Read Support: No 00:07:36.714 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.714 Namespace ID:3 00:07:36.714 Error Recovery Timeout: Unlimited 00:07:36.714 Command Set Identifier: NVM (00h) 00:07:36.714 Deallocate: Supported 00:07:36.714 Deallocated/Unwritten Error: Supported 00:07:36.714 Deallocated Read Value: All 0x00 00:07:36.714 Deallocate in Write Zeroes: Not Supported 00:07:36.714 Deallocated Guard Field: 0xFFFF 00:07:36.714 Flush: Supported 00:07:36.714 Reservation: Not Supported 00:07:36.714 
Namespace Sharing Capabilities: Private 00:07:36.714 Size (in LBAs): 1048576 (4GiB) 00:07:36.714 Capacity (in LBAs): 1048576 (4GiB) 00:07:36.714 Utilization (in LBAs): 1048576 (4GiB) 00:07:36.714 Thin Provisioning: Not Supported 00:07:36.714 Per-NS Atomic Units: No 00:07:36.714 Maximum Single Source Range Length: 128 00:07:36.714 Maximum Copy Length: 128 00:07:36.714 Maximum Source Range Count: 128 00:07:36.714 NGUID/EUI64 Never Reused: No 00:07:36.714 Namespace Write Protected: No 00:07:36.714 Number of LBA Formats: 8 00:07:36.714 Current LBA Format: LBA Format #04 00:07:36.714 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.714 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.714 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.714 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.714 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.714 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.714 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:36.714 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.714 00:07:36.714 NVM Specific Namespace Data 00:07:36.714 =========================== 00:07:36.714 Logical Block Storage Tag Mask: 0 00:07:36.714 Protection Information Capabilities: 00:07:36.715 16b Guard Protection Information Storage Tag Support: No 00:07:36.715 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.715 Storage Tag Check Read Support: No 00:07:36.715 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.715 12:05:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:36.715 12:05:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:36.973 ===================================================== 00:07:36.973 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:36.973 ===================================================== 00:07:36.973 Controller Capabilities/Features 00:07:36.973 ================================ 00:07:36.973 Vendor ID: 1b36 00:07:36.973 Subsystem Vendor ID: 1af4 00:07:36.973 Serial Number: 12343 00:07:36.973 Model Number: QEMU NVMe Ctrl 00:07:36.973 Firmware Version: 8.0.0 00:07:36.973 Recommended Arb Burst: 6 00:07:36.973 IEEE OUI Identifier: 00 54 52 00:07:36.973 Multi-path I/O 00:07:36.974 May have multiple subsystem ports: No 00:07:36.974 May have multiple controllers: Yes 00:07:36.974 Associated with SR-IOV VF: No 00:07:36.974 Max Data Transfer Size: 524288 00:07:36.974 Max Number of Namespaces: 256 00:07:36.974 Max Number of I/O Queues: 64 00:07:36.974 NVMe Specification Version (VS): 1.4 00:07:36.974 NVMe Specification Version (Identify): 1.4 00:07:36.974 Maximum Queue Entries: 2048 
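The 12343 controller whose dump begins above is the FDP-enabled subsystem of this run: its header already shows "May have multiple controllers: Yes", and the attribute list below reports "Endurance Groups: Supported" and "Flexible Data Placement Supported: Supported" under the nqn.2019-08.org.qemu:fdp-subsys3 subsystem NQN. FDP support is advertised through bit 19 of the CTRATT field in the identify-controller data; the sketch below tests the raw word so it does not lean on any one SPDK header's bitfield names, which have varied across versions (recent headers expose CTRATT as a union with a raw view).

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* CTRATT bit 19 advertises Flexible Data Placement (NVMe TP4146).
     * The raw view is used to stay independent of bitfield naming,
     * which differs across SPDK versions. */
    static bool ctrlr_supports_fdp(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        return (cdata->ctratt.raw >> 19) & 1;
    }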
00:07:36.974 Contiguous Queues Required: Yes 00:07:36.974 Arbitration Mechanisms Supported 00:07:36.974 Weighted Round Robin: Not Supported 00:07:36.974 Vendor Specific: Not Supported 00:07:36.974 Reset Timeout: 7500 ms 00:07:36.974 Doorbell Stride: 4 bytes 00:07:36.974 NVM Subsystem Reset: Not Supported 00:07:36.974 Command Sets Supported 00:07:36.974 NVM Command Set: Supported 00:07:36.974 Boot Partition: Not Supported 00:07:36.974 Memory Page Size Minimum: 4096 bytes 00:07:36.974 Memory Page Size Maximum: 65536 bytes 00:07:36.974 Persistent Memory Region: Not Supported 00:07:36.974 Optional Asynchronous Events Supported 00:07:36.974 Namespace Attribute Notices: Supported 00:07:36.974 Firmware Activation Notices: Not Supported 00:07:36.974 ANA Change Notices: Not Supported 00:07:36.974 PLE Aggregate Log Change Notices: Not Supported 00:07:36.974 LBA Status Info Alert Notices: Not Supported 00:07:36.974 EGE Aggregate Log Change Notices: Not Supported 00:07:36.974 Normal NVM Subsystem Shutdown event: Not Supported 00:07:36.974 Zone Descriptor Change Notices: Not Supported 00:07:36.974 Discovery Log Change Notices: Not Supported 00:07:36.974 Controller Attributes 00:07:36.974 128-bit Host Identifier: Not Supported 00:07:36.974 Non-Operational Permissive Mode: Not Supported 00:07:36.974 NVM Sets: Not Supported 00:07:36.974 Read Recovery Levels: Not Supported 00:07:36.974 Endurance Groups: Supported 00:07:36.974 Predictable Latency Mode: Not Supported 00:07:36.974 Traffic Based Keep Alive: Not Supported 00:07:36.974 Namespace Granularity: Not Supported 00:07:36.974 SQ Associations: Not Supported 00:07:36.974 UUID List: Not Supported 00:07:36.974 Multi-Domain Subsystem: Not Supported 00:07:36.974 Fixed Capacity Management: Not Supported 00:07:36.974 Variable Capacity Management: Not Supported 00:07:36.974 Delete Endurance Group: Not Supported 00:07:36.974 Delete NVM Set: Not Supported 00:07:36.974 Extended LBA Formats Supported: Supported 00:07:36.974 Flexible Data Placement Supported: Supported 00:07:36.974 00:07:36.974 Controller Memory Buffer Support 00:07:36.974 ================================ 00:07:36.974 Supported: No 00:07:36.974 00:07:36.974 Persistent Memory Region Support 00:07:36.974 ================================ 00:07:36.974 Supported: No 00:07:36.974 00:07:36.974 Admin Command Set Attributes 00:07:36.974 ============================ 00:07:36.974 Security Send/Receive: Not Supported 00:07:36.974 Format NVM: Supported 00:07:36.974 Firmware Activate/Download: Not Supported 00:07:36.974 Namespace Management: Supported 00:07:36.974 Device Self-Test: Not Supported 00:07:36.974 Directives: Supported 00:07:36.974 NVMe-MI: Not Supported 00:07:36.974 Virtualization Management: Not Supported 00:07:36.974 Doorbell Buffer Config: Supported 00:07:36.974 Get LBA Status Capability: Not Supported 00:07:36.974 Command & Feature Lockdown Capability: Not Supported 00:07:36.974 Abort Command Limit: 4 00:07:36.974 Async Event Request Limit: 4 00:07:36.974 Number of Firmware Slots: N/A 00:07:36.974 Firmware Slot 1 Read-Only: N/A 00:07:36.974 Firmware Activation Without Reset: N/A 00:07:36.974 Multiple Update Detection Support: N/A 00:07:36.974 Firmware Update Granularity: No Information Provided 00:07:36.974 Per-Namespace SMART Log: Yes 00:07:36.974 Asymmetric Namespace Access Log Page: Not Supported 00:07:36.974 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:36.974 Command Effects Log Page: Supported 00:07:36.974 Get Log Page Extended Data: Supported 00:07:36.974 Telemetry Log Pages: Not
Supported 00:07:36.974 Persistent Event Log Pages: Not Supported 00:07:36.974 Supported Log Pages Log Page: May Support 00:07:36.974 Commands Supported & Effects Log Page: Not Supported 00:07:36.974 Feature Identifiers & Effects Log Page: May Support 00:07:36.974 NVMe-MI Commands & Effects Log Page: May Support 00:07:36.974 Data Area 4 for Telemetry Log: Not Supported 00:07:36.974 Error Log Page Entries Supported: 1 00:07:36.974 Keep Alive: Not Supported 00:07:36.974 00:07:36.974 NVM Command Set Attributes 00:07:36.974 ========================== 00:07:36.974 Submission Queue Entry Size 00:07:36.974 Max: 64 00:07:36.974 Min: 64 00:07:36.974 Completion Queue Entry Size 00:07:36.974 Max: 16 00:07:36.974 Min: 16 00:07:36.974 Number of Namespaces: 256 00:07:36.974 Compare Command: Supported 00:07:36.974 Write Uncorrectable Command: Not Supported 00:07:36.974 Dataset Management Command: Supported 00:07:36.974 Write Zeroes Command: Supported 00:07:36.974 Set Features Save Field: Supported 00:07:36.974 Reservations: Not Supported 00:07:36.974 Timestamp: Supported 00:07:36.974 Copy: Supported 00:07:36.974 Volatile Write Cache: Present 00:07:36.974 Atomic Write Unit (Normal): 1 00:07:36.974 Atomic Write Unit (PFail): 1 00:07:36.974 Atomic Compare & Write Unit: 1 00:07:36.974 Fused Compare & Write: Not Supported 00:07:36.974 Scatter-Gather List 00:07:36.974 SGL Command Set: Supported 00:07:36.974 SGL Keyed: Not Supported 00:07:36.974 SGL Bit Bucket Descriptor: Not Supported 00:07:36.974 SGL Metadata Pointer: Not Supported 00:07:36.974 Oversized SGL: Not Supported 00:07:36.974 SGL Metadata Address: Not Supported 00:07:36.974 SGL Offset: Not Supported 00:07:36.974 Transport SGL Data Block: Not Supported 00:07:36.974 Replay Protected Memory Block: Not Supported 00:07:36.974 00:07:36.974 Firmware Slot Information 00:07:36.974 ========================= 00:07:36.974 Active slot: 1 00:07:36.974 Slot 1 Firmware Revision: 1.0 00:07:36.974 00:07:36.974 00:07:36.974 Commands Supported and Effects 00:07:36.974 ============================== 00:07:36.974 Admin Commands 00:07:36.974 -------------- 00:07:36.974 Delete I/O Submission Queue (00h): Supported 00:07:36.974 Create I/O Submission Queue (01h): Supported 00:07:36.974 Get Log Page (02h): Supported 00:07:36.974 Delete I/O Completion Queue (04h): Supported 00:07:36.974 Create I/O Completion Queue (05h): Supported 00:07:36.975 Identify (06h): Supported 00:07:36.975 Abort (08h): Supported 00:07:36.975 Set Features (09h): Supported 00:07:36.975 Get Features (0Ah): Supported 00:07:36.975 Asynchronous Event Request (0Ch): Supported 00:07:36.975 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:36.975 Directive Send (19h): Supported 00:07:36.975 Directive Receive (1Ah): Supported 00:07:36.975 Virtualization Management (1Ch): Supported 00:07:36.975 Doorbell Buffer Config (7Ch): Supported 00:07:36.975 Format NVM (80h): Supported LBA-Change 00:07:36.975 I/O Commands 00:07:36.975 ------------ 00:07:36.975 Flush (00h): Supported LBA-Change 00:07:36.975 Write (01h): Supported LBA-Change 00:07:36.975 Read (02h): Supported 00:07:36.975 Compare (05h): Supported 00:07:36.975 Write Zeroes (08h): Supported LBA-Change 00:07:36.975 Dataset Management (09h): Supported LBA-Change 00:07:36.975 Unknown (0Ch): Supported 00:07:36.975 Unknown (12h): Supported 00:07:36.975 Copy (19h): Supported LBA-Change 00:07:36.975 Unknown (1Dh): Supported LBA-Change 00:07:36.975 00:07:36.975 Error Log 00:07:36.975 ========= 00:07:36.975 00:07:36.975 Arbitration 00:07:36.975 ===========
00:07:36.975 Arbitration Burst: no limit 00:07:36.975 00:07:36.975 Power Management 00:07:36.975 ================ 00:07:36.975 Number of Power States: 1 00:07:36.975 Current Power State: Power State #0 00:07:36.975 Power State #0: 00:07:36.975 Max Power: 25.00 W 00:07:36.975 Non-Operational State: Operational 00:07:36.975 Entry Latency: 16 microseconds 00:07:36.975 Exit Latency: 4 microseconds 00:07:36.975 Relative Read Throughput: 0 00:07:36.975 Relative Read Latency: 0 00:07:36.975 Relative Write Throughput: 0 00:07:36.975 Relative Write Latency: 0 00:07:36.975 Idle Power: Not Reported 00:07:36.975 Active Power: Not Reported 00:07:36.975 Non-Operational Permissive Mode: Not Supported 00:07:36.975 00:07:36.975 Health Information 00:07:36.975 ================== 00:07:36.975 Critical Warnings: 00:07:36.975 Available Spare Space: OK 00:07:36.975 Temperature: OK 00:07:36.975 Device Reliability: OK 00:07:36.975 Read Only: No 00:07:36.975 Volatile Memory Backup: OK 00:07:36.975 Current Temperature: 323 Kelvin (50 Celsius) 00:07:36.975 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:36.975 Available Spare: 0% 00:07:36.975 Available Spare Threshold: 0% 00:07:36.975 Life Percentage Used: 0% 00:07:36.975 Data Units Read: 716 00:07:36.975 Data Units Written: 645 00:07:36.975 Host Read Commands: 31592 00:07:36.975 Host Write Commands: 31015 00:07:36.975 Controller Busy Time: 0 minutes 00:07:36.975 Power Cycles: 0 00:07:36.975 Power On Hours: 0 hours 00:07:36.975 Unsafe Shutdowns: 0 00:07:36.975 Unrecoverable Media Errors: 0 00:07:36.975 Lifetime Error Log Entries: 0 00:07:36.975 Warning Temperature Time: 0 minutes 00:07:36.975 Critical Temperature Time: 0 minutes 00:07:36.975 00:07:36.975 Number of Queues 00:07:36.975 ================ 00:07:36.975 Number of I/O Submission Queues: 64 00:07:36.975 Number of I/O Completion Queues: 64 00:07:36.975 00:07:36.975 ZNS Specific Controller Data 00:07:36.975 ============================ 00:07:36.975 Zone Append Size Limit: 0 00:07:36.975 00:07:36.975 00:07:36.975 Active Namespaces 00:07:36.975 ================= 00:07:36.975 Namespace ID:1 00:07:36.975 Error Recovery Timeout: Unlimited 00:07:36.975 Command Set Identifier: NVM (00h) 00:07:36.975 Deallocate: Supported 00:07:36.975 Deallocated/Unwritten Error: Supported 00:07:36.975 Deallocated Read Value: All 0x00 00:07:36.975 Deallocate in Write Zeroes: Not Supported 00:07:36.975 Deallocated Guard Field: 0xFFFF 00:07:36.975 Flush: Supported 00:07:36.975 Reservation: Not Supported 00:07:36.975 Namespace Sharing Capabilities: Multiple Controllers 00:07:36.975 Size (in LBAs): 262144 (1GiB) 00:07:36.975 Capacity (in LBAs): 262144 (1GiB) 00:07:36.975 Utilization (in LBAs): 262144 (1GiB) 00:07:36.975 Thin Provisioning: Not Supported 00:07:36.975 Per-NS Atomic Units: No 00:07:36.975 Maximum Single Source Range Length: 128 00:07:36.975 Maximum Copy Length: 128 00:07:36.975 Maximum Source Range Count: 128 00:07:36.975 NGUID/EUI64 Never Reused: No 00:07:36.975 Namespace Write Protected: No 00:07:36.975 Endurance group ID: 1 00:07:36.975 Number of LBA Formats: 8 00:07:36.975 Current LBA Format: LBA Format #04 00:07:36.975 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:36.975 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:36.975 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:36.975 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:36.975 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:36.975 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:36.975 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:36.975 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:36.975 00:07:36.975 Get Feature FDP: 00:07:36.975 ================ 00:07:36.975 Enabled: Yes 00:07:36.975 FDP configuration index: 0 00:07:36.975 00:07:36.975 FDP configurations log page 00:07:36.975 =========================== 00:07:36.975 Number of FDP configurations: 1 00:07:36.975 Version: 0 00:07:36.975 Size: 112 00:07:36.975 FDP Configuration Descriptor: 0 00:07:36.975 Descriptor Size: 96 00:07:36.975 Reclaim Group Identifier format: 2 00:07:36.975 FDP Volatile Write Cache: Not Present 00:07:36.975 FDP Configuration: Valid 00:07:36.975 Vendor Specific Size: 0 00:07:36.975 Number of Reclaim Groups: 2 00:07:36.975 Number of Reclaim Unit Handles: 8 00:07:36.975 Max Placement Identifiers: 128 00:07:36.975 Number of Namespaces Supported: 256 00:07:36.975 Reclaim Unit Nominal Size: 6000000 bytes 00:07:36.975 Estimated Reclaim Unit Time Limit: Not Reported 00:07:36.975 RUH Desc #000: RUH Type: Initially Isolated 00:07:36.975 RUH Desc #001: RUH Type: Initially Isolated 00:07:36.975 RUH Desc #002: RUH Type: Initially Isolated 00:07:36.975 RUH Desc #003: RUH Type: Initially Isolated 00:07:36.975 RUH Desc #004: RUH Type: Initially Isolated 00:07:36.975 RUH Desc #005: RUH Type: Initially Isolated 00:07:36.975 RUH Desc #006: RUH Type: Initially Isolated 00:07:36.975 RUH Desc #007: RUH Type: Initially Isolated 00:07:36.975 00:07:36.975 FDP reclaim unit handle usage log page 00:07:36.975 ====================================== 00:07:36.975 Number of Reclaim Unit Handles: 8 00:07:36.975 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:36.975 RUH Usage Desc #001: RUH Attributes: Unused 00:07:36.975 RUH Usage Desc #002: RUH Attributes: Unused 00:07:36.975 RUH Usage Desc #003: RUH Attributes: Unused 00:07:36.975 RUH Usage Desc #004: RUH Attributes: Unused 00:07:36.975 RUH Usage Desc #005: RUH Attributes: Unused 00:07:36.975 RUH Usage Desc #006: RUH Attributes: Unused 00:07:36.975 RUH Usage Desc #007: RUH Attributes: Unused 00:07:36.976 00:07:36.976 FDP statistics log page 00:07:36.976 ======================= 00:07:36.976 Host bytes with metadata written: 413704192 00:07:36.976 Media bytes with metadata written: 413749248 00:07:36.976 Media bytes erased: 0 00:07:36.976 00:07:36.976 FDP events log page 00:07:36.976 =================== 00:07:36.976 Number of FDP events: 0 00:07:36.976 00:07:36.976 NVM Specific Namespace Data 00:07:36.976 =========================== 00:07:36.976 Logical Block Storage Tag Mask: 0 00:07:36.976 Protection Information Capabilities: 00:07:36.976 16b Guard Protection Information Storage Tag Support: No 00:07:36.976 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:36.976 Storage Tag Check Read Support: No 00:07:36.976 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.976 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.976 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.976 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.976 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.976 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.976 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.976 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:36.976 00:07:36.976 real 0m1.312s 00:07:36.976 user 0m0.493s 00:07:36.976 sys 0m0.583s 00:07:36.976 12:05:37 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.976 12:05:37 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:36.976 ************************************ 00:07:36.976 END TEST nvme_identify 00:07:36.976 ************************************ 00:07:36.976 12:05:37 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:36.976 12:05:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.976 12:05:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.976 12:05:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.976 ************************************ 00:07:36.976 START TEST nvme_perf 00:07:36.976 ************************************ 00:07:36.976 12:05:37 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:36.976 12:05:37 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:38.353 Initializing NVMe Controllers 00:07:38.353 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:38.353 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:38.353 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:38.354 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:38.354 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:38.354 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:38.354 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:38.354 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:38.354 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:38.354 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:38.354 Initialization complete. Launching workers. 
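For reference, the identify and perf steps recorded above can be rerun by hand with the same flags (a minimal sketch: the /home/vagrant/spdk_repo paths, the shared-memory id 0, and the PCIe address are specific to this run, and the flag glosses assume the standard spdk_nvme_identify/spdk_nvme_perf options):

    # Dump controller and namespace data for one controller, as in the identify loop above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

    # Read workload matching the perf invocation above: queue depth 128, 12288-byte I/O,
    # 1 second; -LL requests the latency summary plus the detailed histograms that follow
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

As a sanity check on the summary table below: with 12288-byte reads, MiB/s = IOPS x 12288 / 2^20, so 17513.51 IOPS comes to 205.24 MiB/s per namespace, and the Total row sums the six namespaces (6 x 17513.51 ≈ 105081.08 IOPS, up to rounding).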
00:07:38.354 ======================================================== 00:07:38.354 Latency(us) 00:07:38.354 Device Information : IOPS MiB/s Average min max 00:07:38.354 PCIE (0000:00:10.0) NSID 1 from core 0: 17513.51 205.24 7312.93 5594.17 37541.18 00:07:38.354 PCIE (0000:00:11.0) NSID 1 from core 0: 17513.51 205.24 7294.40 5679.89 34745.13 00:07:38.354 PCIE (0000:00:13.0) NSID 1 from core 0: 17513.51 205.24 7274.55 5679.72 32111.22 00:07:38.354 PCIE (0000:00:12.0) NSID 1 from core 0: 17513.51 205.24 7254.24 5697.09 29254.21 00:07:38.354 PCIE (0000:00:12.0) NSID 2 from core 0: 17513.51 205.24 7233.54 5692.89 26369.84 00:07:38.354 PCIE (0000:00:12.0) NSID 3 from core 0: 17513.51 205.24 7213.62 5688.89 23526.60 00:07:38.354 ======================================================== 00:07:38.354 Total : 105081.08 1231.42 7263.88 5594.17 37541.18 00:07:38.354 00:07:38.354 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:38.354 ================================================================================= 00:07:38.354 1.00000% : 5747.003us 00:07:38.354 10.00000% : 5999.065us 00:07:38.354 25.00000% : 6251.126us 00:07:38.354 50.00000% : 6654.425us 00:07:38.354 75.00000% : 7965.145us 00:07:38.354 90.00000% : 8973.391us 00:07:38.354 95.00000% : 9880.812us 00:07:38.354 98.00000% : 11393.182us 00:07:38.354 99.00000% : 12502.252us 00:07:38.354 99.50000% : 30852.332us 00:07:38.354 99.90000% : 36901.809us 00:07:38.354 99.99000% : 37506.757us 00:07:38.354 99.99900% : 37708.406us 00:07:38.354 99.99990% : 37708.406us 00:07:38.354 99.99999% : 37708.406us 00:07:38.354 00:07:38.354 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:38.354 ================================================================================= 00:07:38.354 1.00000% : 5822.622us 00:07:38.354 10.00000% : 6049.477us 00:07:38.354 25.00000% : 6276.332us 00:07:38.354 50.00000% : 6604.012us 00:07:38.354 75.00000% : 8015.557us 00:07:38.354 90.00000% : 8922.978us 00:07:38.354 95.00000% : 9729.575us 00:07:38.354 98.00000% : 11544.418us 00:07:38.354 99.00000% : 12703.902us 00:07:38.354 99.50000% : 28230.892us 00:07:38.354 99.90000% : 34280.369us 00:07:38.354 99.99000% : 34885.317us 00:07:38.354 99.99900% : 34885.317us 00:07:38.354 99.99990% : 34885.317us 00:07:38.354 99.99999% : 34885.317us 00:07:38.354 00:07:38.354 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:38.354 ================================================================================= 00:07:38.354 1.00000% : 5822.622us 00:07:38.354 10.00000% : 6049.477us 00:07:38.354 25.00000% : 6276.332us 00:07:38.354 50.00000% : 6604.012us 00:07:38.354 75.00000% : 7965.145us 00:07:38.354 90.00000% : 8973.391us 00:07:38.354 95.00000% : 9931.225us 00:07:38.354 98.00000% : 11342.769us 00:07:38.354 99.00000% : 12250.191us 00:07:38.354 99.50000% : 25508.628us 00:07:38.354 99.90000% : 31658.929us 00:07:38.354 99.99000% : 32263.877us 00:07:38.354 99.99900% : 32263.877us 00:07:38.354 99.99990% : 32263.877us 00:07:38.354 99.99999% : 32263.877us 00:07:38.354 00:07:38.354 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:38.354 ================================================================================= 00:07:38.354 1.00000% : 5822.622us 00:07:38.354 10.00000% : 6049.477us 00:07:38.354 25.00000% : 6276.332us 00:07:38.354 50.00000% : 6604.012us 00:07:38.354 75.00000% : 7965.145us 00:07:38.354 90.00000% : 8973.391us 00:07:38.354 95.00000% : 9981.637us 00:07:38.354 98.00000% : 11191.532us 00:07:38.354 99.00000% : 
12098.954us 00:07:38.354 99.50000% : 22685.538us 00:07:38.354 99.90000% : 28835.840us 00:07:38.354 99.99000% : 29239.138us 00:07:38.354 99.99900% : 29440.788us 00:07:38.354 99.99990% : 29440.788us 00:07:38.354 99.99999% : 29440.788us 00:07:38.354 00:07:38.354 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:38.354 ================================================================================= 00:07:38.354 1.00000% : 5822.622us 00:07:38.354 10.00000% : 6049.477us 00:07:38.354 25.00000% : 6276.332us 00:07:38.354 50.00000% : 6604.012us 00:07:38.354 75.00000% : 7965.145us 00:07:38.354 90.00000% : 8973.391us 00:07:38.354 95.00000% : 9981.637us 00:07:38.354 98.00000% : 11040.295us 00:07:38.354 99.00000% : 11897.305us 00:07:38.354 99.50000% : 19761.625us 00:07:38.354 99.90000% : 25811.102us 00:07:38.354 99.99000% : 26416.049us 00:07:38.354 99.99900% : 26416.049us 00:07:38.354 99.99990% : 26416.049us 00:07:38.354 99.99999% : 26416.049us 00:07:38.354 00:07:38.354 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:38.354 ================================================================================= 00:07:38.354 1.00000% : 5822.622us 00:07:38.354 10.00000% : 6049.477us 00:07:38.354 25.00000% : 6276.332us 00:07:38.354 50.00000% : 6604.012us 00:07:38.354 75.00000% : 8015.557us 00:07:38.354 90.00000% : 8922.978us 00:07:38.354 95.00000% : 9981.637us 00:07:38.354 98.00000% : 11090.708us 00:07:38.354 99.00000% : 12250.191us 00:07:38.354 99.50000% : 16938.535us 00:07:38.354 99.90000% : 22988.012us 00:07:38.354 99.99000% : 23592.960us 00:07:38.354 99.99900% : 23592.960us 00:07:38.354 99.99990% : 23592.960us 00:07:38.354 99.99999% : 23592.960us 00:07:38.354 00:07:38.354 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:38.354 ============================================================================== 00:07:38.354 Range in us Cumulative IO count 00:07:38.354 5570.560 - 5595.766: 0.0114% ( 2) 00:07:38.354 5595.766 - 5620.972: 0.0228% ( 2) 00:07:38.354 5620.972 - 5646.178: 0.0969% ( 13) 00:07:38.354 5646.178 - 5671.385: 0.2053% ( 19) 00:07:38.354 5671.385 - 5696.591: 0.3821% ( 31) 00:07:38.354 5696.591 - 5721.797: 0.6672% ( 50) 00:07:38.354 5721.797 - 5747.003: 1.0265% ( 63) 00:07:38.354 5747.003 - 5772.209: 1.4998% ( 83) 00:07:38.354 5772.209 - 5797.415: 2.1613% ( 116) 00:07:38.354 5797.415 - 5822.622: 2.9311% ( 135) 00:07:38.354 5822.622 - 5847.828: 3.7523% ( 144) 00:07:38.354 5847.828 - 5873.034: 4.7901% ( 182) 00:07:38.354 5873.034 - 5898.240: 5.8280% ( 182) 00:07:38.354 5898.240 - 5923.446: 6.9514% ( 197) 00:07:38.354 5923.446 - 5948.652: 8.0805% ( 198) 00:07:38.354 5948.652 - 5973.858: 9.3579% ( 224) 00:07:38.354 5973.858 - 5999.065: 10.7436% ( 243) 00:07:38.354 5999.065 - 6024.271: 12.0609% ( 231) 00:07:38.354 6024.271 - 6049.477: 13.4865% ( 250) 00:07:38.354 6049.477 - 6074.683: 14.8495% ( 239) 00:07:38.354 6074.683 - 6099.889: 16.3321% ( 260) 00:07:38.354 6099.889 - 6125.095: 17.8889% ( 273) 00:07:38.354 6125.095 - 6150.302: 19.3488% ( 256) 00:07:38.354 6150.302 - 6175.508: 20.9170% ( 275) 00:07:38.354 6175.508 - 6200.714: 22.4396% ( 267) 00:07:38.354 6200.714 - 6225.920: 23.9336% ( 262) 00:07:38.354 6225.920 - 6251.126: 25.6159% ( 295) 00:07:38.354 6251.126 - 6276.332: 27.0814% ( 257) 00:07:38.354 6276.332 - 6301.538: 28.8606% ( 312) 00:07:38.354 6301.538 - 6326.745: 30.4174% ( 273) 00:07:38.354 6326.745 - 6351.951: 32.0484% ( 286) 00:07:38.354 6351.951 - 6377.157: 33.7078% ( 291) 00:07:38.354 6377.157 - 6402.363: 35.2988% ( 279) 
00:07:38.354 6402.363 - 6427.569: 36.9012% ( 281) 00:07:38.354 6427.569 - 6452.775: 38.6690% ( 310) 00:07:38.354 6452.775 - 6503.188: 41.9138% ( 569) 00:07:38.354 6503.188 - 6553.600: 45.0673% ( 553) 00:07:38.354 6553.600 - 6604.012: 48.2721% ( 562) 00:07:38.354 6604.012 - 6654.425: 51.5568% ( 576) 00:07:38.354 6654.425 - 6704.837: 54.7274% ( 556) 00:07:38.354 6704.837 - 6755.249: 57.6186% ( 507) 00:07:38.354 6755.249 - 6805.662: 60.1791% ( 449) 00:07:38.354 6805.662 - 6856.074: 62.3688% ( 384) 00:07:38.354 6856.074 - 6906.486: 63.9713% ( 281) 00:07:38.354 6906.486 - 6956.898: 65.2543% ( 225) 00:07:38.354 6956.898 - 7007.311: 66.3492% ( 192) 00:07:38.354 7007.311 - 7057.723: 67.1761% ( 145) 00:07:38.354 7057.723 - 7108.135: 67.7806% ( 106) 00:07:38.354 7108.135 - 7158.548: 68.2653% ( 85) 00:07:38.354 7158.548 - 7208.960: 68.5960% ( 58) 00:07:38.354 7208.960 - 7259.372: 68.9097% ( 55) 00:07:38.354 7259.372 - 7309.785: 69.1492% ( 42) 00:07:38.354 7309.785 - 7360.197: 69.3602% ( 37) 00:07:38.354 7360.197 - 7410.609: 69.6966% ( 59) 00:07:38.354 7410.609 - 7461.022: 70.0787% ( 67) 00:07:38.354 7461.022 - 7511.434: 70.5577% ( 84) 00:07:38.354 7511.434 - 7561.846: 70.9740% ( 73) 00:07:38.354 7561.846 - 7612.258: 71.3846% ( 72) 00:07:38.354 7612.258 - 7662.671: 71.8123% ( 75) 00:07:38.354 7662.671 - 7713.083: 72.3198% ( 89) 00:07:38.354 7713.083 - 7763.495: 72.7589% ( 77) 00:07:38.354 7763.495 - 7813.908: 73.2721% ( 90) 00:07:38.354 7813.908 - 7864.320: 73.9051% ( 111) 00:07:38.354 7864.320 - 7914.732: 74.6407% ( 129) 00:07:38.354 7914.732 - 7965.145: 75.3479% ( 124) 00:07:38.354 7965.145 - 8015.557: 76.0664% ( 126) 00:07:38.354 8015.557 - 8065.969: 76.8191% ( 132) 00:07:38.354 8065.969 - 8116.382: 77.6175% ( 140) 00:07:38.354 8116.382 - 8166.794: 78.3132% ( 122) 00:07:38.354 8166.794 - 8217.206: 79.0260% ( 125) 00:07:38.354 8217.206 - 8267.618: 79.8130% ( 138) 00:07:38.354 8267.618 - 8318.031: 80.5144% ( 123) 00:07:38.354 8318.031 - 8368.443: 81.3070% ( 139) 00:07:38.354 8368.443 - 8418.855: 82.0712% ( 134) 00:07:38.354 8418.855 - 8469.268: 82.8353% ( 134) 00:07:38.354 8469.268 - 8519.680: 83.5938% ( 133) 00:07:38.354 8519.680 - 8570.092: 84.3921% ( 140) 00:07:38.354 8570.092 - 8620.505: 85.1734% ( 137) 00:07:38.354 8620.505 - 8670.917: 85.9717% ( 140) 00:07:38.354 8670.917 - 8721.329: 86.7986% ( 145) 00:07:38.354 8721.329 - 8771.742: 87.5570% ( 133) 00:07:38.354 8771.742 - 8822.154: 88.3611% ( 141) 00:07:38.354 8822.154 - 8872.566: 89.0112% ( 114) 00:07:38.354 8872.566 - 8922.978: 89.6271% ( 108) 00:07:38.354 8922.978 - 8973.391: 90.1574% ( 93) 00:07:38.354 8973.391 - 9023.803: 90.7219% ( 99) 00:07:38.354 9023.803 - 9074.215: 91.1610% ( 77) 00:07:38.354 9074.215 - 9124.628: 91.6914% ( 93) 00:07:38.354 9124.628 - 9175.040: 92.1191% ( 75) 00:07:38.354 9175.040 - 9225.452: 92.5011% ( 67) 00:07:38.354 9225.452 - 9275.865: 92.8718% ( 65) 00:07:38.354 9275.865 - 9326.277: 93.1113% ( 42) 00:07:38.354 9326.277 - 9376.689: 93.3679% ( 45) 00:07:38.354 9376.689 - 9427.102: 93.6017% ( 41) 00:07:38.354 9427.102 - 9477.514: 93.8184% ( 38) 00:07:38.354 9477.514 - 9527.926: 94.0009% ( 32) 00:07:38.354 9527.926 - 9578.338: 94.1948% ( 34) 00:07:38.354 9578.338 - 9628.751: 94.3773% ( 32) 00:07:38.354 9628.751 - 9679.163: 94.5084% ( 23) 00:07:38.354 9679.163 - 9729.575: 94.6681% ( 28) 00:07:38.354 9729.575 - 9779.988: 94.8278% ( 28) 00:07:38.354 9779.988 - 9830.400: 94.9589% ( 23) 00:07:38.354 9830.400 - 9880.812: 95.1072% ( 26) 00:07:38.354 9880.812 - 9931.225: 95.2270% ( 21) 00:07:38.354 9931.225 - 9981.637: 
95.3809% ( 27) 00:07:38.354 9981.637 - 10032.049: 95.4950% ( 20) 00:07:38.354 10032.049 - 10082.462: 95.6318% ( 24) 00:07:38.354 10082.462 - 10132.874: 95.7630% ( 23) 00:07:38.354 10132.874 - 10183.286: 95.8885% ( 22) 00:07:38.354 10183.286 - 10233.698: 96.0196% ( 23) 00:07:38.354 10233.698 - 10284.111: 96.1280% ( 19) 00:07:38.354 10284.111 - 10334.523: 96.2363% ( 19) 00:07:38.354 10334.523 - 10384.935: 96.3447% ( 19) 00:07:38.354 10384.935 - 10435.348: 96.4587% ( 20) 00:07:38.354 10435.348 - 10485.760: 96.5614% ( 18) 00:07:38.355 10485.760 - 10536.172: 96.6640% ( 18) 00:07:38.355 10536.172 - 10586.585: 96.7781% ( 20) 00:07:38.355 10586.585 - 10636.997: 96.8522% ( 13) 00:07:38.355 10636.997 - 10687.409: 96.9434% ( 16) 00:07:38.355 10687.409 - 10737.822: 97.0233% ( 14) 00:07:38.355 10737.822 - 10788.234: 97.1145% ( 16) 00:07:38.355 10788.234 - 10838.646: 97.2057% ( 16) 00:07:38.355 10838.646 - 10889.058: 97.2685% ( 11) 00:07:38.355 10889.058 - 10939.471: 97.3540% ( 15) 00:07:38.355 10939.471 - 10989.883: 97.4396% ( 15) 00:07:38.355 10989.883 - 11040.295: 97.5194% ( 14) 00:07:38.355 11040.295 - 11090.708: 97.5992% ( 14) 00:07:38.355 11090.708 - 11141.120: 97.6848% ( 15) 00:07:38.355 11141.120 - 11191.532: 97.7760% ( 16) 00:07:38.355 11191.532 - 11241.945: 97.8558% ( 14) 00:07:38.355 11241.945 - 11292.357: 97.9129% ( 10) 00:07:38.355 11292.357 - 11342.769: 97.9642% ( 9) 00:07:38.355 11342.769 - 11393.182: 98.0041% ( 7) 00:07:38.355 11393.182 - 11443.594: 98.0611% ( 10) 00:07:38.355 11443.594 - 11494.006: 98.1125% ( 9) 00:07:38.355 11494.006 - 11544.418: 98.1695% ( 10) 00:07:38.355 11544.418 - 11594.831: 98.2151% ( 8) 00:07:38.355 11594.831 - 11645.243: 98.2664% ( 9) 00:07:38.355 11645.243 - 11695.655: 98.3177% ( 9) 00:07:38.355 11695.655 - 11746.068: 98.3862% ( 12) 00:07:38.355 11746.068 - 11796.480: 98.4318% ( 8) 00:07:38.355 11796.480 - 11846.892: 98.4888% ( 10) 00:07:38.355 11846.892 - 11897.305: 98.5401% ( 9) 00:07:38.355 11897.305 - 11947.717: 98.5972% ( 10) 00:07:38.355 11947.717 - 11998.129: 98.6428% ( 8) 00:07:38.355 11998.129 - 12048.542: 98.6770% ( 6) 00:07:38.355 12048.542 - 12098.954: 98.7169% ( 7) 00:07:38.355 12098.954 - 12149.366: 98.7568% ( 7) 00:07:38.355 12149.366 - 12199.778: 98.7968% ( 7) 00:07:38.355 12199.778 - 12250.191: 98.8367% ( 7) 00:07:38.355 12250.191 - 12300.603: 98.8766% ( 7) 00:07:38.355 12300.603 - 12351.015: 98.9108% ( 6) 00:07:38.355 12351.015 - 12401.428: 98.9393% ( 5) 00:07:38.355 12401.428 - 12451.840: 98.9849% ( 8) 00:07:38.355 12451.840 - 12502.252: 99.0135% ( 5) 00:07:38.355 12502.252 - 12552.665: 99.0363% ( 4) 00:07:38.355 12552.665 - 12603.077: 99.0591% ( 4) 00:07:38.355 12603.077 - 12653.489: 99.0933% ( 6) 00:07:38.355 12653.489 - 12703.902: 99.1104% ( 3) 00:07:38.355 12703.902 - 12754.314: 99.1332% ( 4) 00:07:38.355 12754.314 - 12804.726: 99.1560% ( 4) 00:07:38.355 12804.726 - 12855.138: 99.1731% ( 3) 00:07:38.355 12855.138 - 12905.551: 99.1902% ( 3) 00:07:38.355 12905.551 - 13006.375: 99.2073% ( 3) 00:07:38.355 13006.375 - 13107.200: 99.2245% ( 3) 00:07:38.355 13107.200 - 13208.025: 99.2416% ( 3) 00:07:38.355 13208.025 - 13308.849: 99.2587% ( 3) 00:07:38.355 13308.849 - 13409.674: 99.2701% ( 2) 00:07:38.355 29037.489 - 29239.138: 99.2872% ( 3) 00:07:38.355 29239.138 - 29440.788: 99.3157% ( 5) 00:07:38.355 29440.788 - 29642.437: 99.3385% ( 4) 00:07:38.355 29642.437 - 29844.086: 99.3727% ( 6) 00:07:38.355 29844.086 - 30045.735: 99.4069% ( 6) 00:07:38.355 30045.735 - 30247.385: 99.4354% ( 5) 00:07:38.355 30247.385 - 30449.034: 99.4640% ( 5) 
00:07:38.355 30449.034 - 30650.683: 99.4982% ( 6) 00:07:38.355 30650.683 - 30852.332: 99.5267% ( 5) 00:07:38.355 30852.332 - 31053.982: 99.5495% ( 4) 00:07:38.355 31053.982 - 31255.631: 99.5837% ( 6) 00:07:38.355 31255.631 - 31457.280: 99.6122% ( 5) 00:07:38.355 31457.280 - 31658.929: 99.6350% ( 4) 00:07:38.355 34885.317 - 35086.966: 99.6407% ( 1) 00:07:38.355 35086.966 - 35288.615: 99.6635% ( 4) 00:07:38.355 35288.615 - 35490.265: 99.6978% ( 6) 00:07:38.355 35490.265 - 35691.914: 99.7263% ( 5) 00:07:38.355 35691.914 - 35893.563: 99.7548% ( 5) 00:07:38.355 35893.563 - 36095.212: 99.7833% ( 5) 00:07:38.355 36095.212 - 36296.862: 99.8118% ( 5) 00:07:38.355 36296.862 - 36498.511: 99.8460% ( 6) 00:07:38.355 36498.511 - 36700.160: 99.8745% ( 5) 00:07:38.355 36700.160 - 36901.809: 99.9088% ( 6) 00:07:38.355 36901.809 - 37103.458: 99.9373% ( 5) 00:07:38.355 37103.458 - 37305.108: 99.9658% ( 5) 00:07:38.355 37305.108 - 37506.757: 99.9943% ( 5) 00:07:38.355 37506.757 - 37708.406: 100.0000% ( 1) 00:07:38.355 00:07:38.355 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:38.355 ============================================================================== 00:07:38.355 Range in us Cumulative IO count 00:07:38.355 5671.385 - 5696.591: 0.0171% ( 3) 00:07:38.355 5696.591 - 5721.797: 0.0798% ( 11) 00:07:38.355 5721.797 - 5747.003: 0.2053% ( 22) 00:07:38.355 5747.003 - 5772.209: 0.3935% ( 33) 00:07:38.355 5772.209 - 5797.415: 0.7584% ( 64) 00:07:38.355 5797.415 - 5822.622: 1.1576% ( 70) 00:07:38.355 5822.622 - 5847.828: 1.6994% ( 95) 00:07:38.355 5847.828 - 5873.034: 2.3323% ( 111) 00:07:38.355 5873.034 - 5898.240: 3.1649% ( 146) 00:07:38.355 5898.240 - 5923.446: 4.2256% ( 186) 00:07:38.355 5923.446 - 5948.652: 5.4003% ( 206) 00:07:38.355 5948.652 - 5973.858: 6.5522% ( 202) 00:07:38.355 5973.858 - 5999.065: 7.8695% ( 231) 00:07:38.355 5999.065 - 6024.271: 9.2381% ( 240) 00:07:38.355 6024.271 - 6049.477: 10.6923% ( 255) 00:07:38.355 6049.477 - 6074.683: 12.1921% ( 263) 00:07:38.355 6074.683 - 6099.889: 13.7603% ( 275) 00:07:38.355 6099.889 - 6125.095: 15.5395% ( 312) 00:07:38.355 6125.095 - 6150.302: 17.2616% ( 302) 00:07:38.355 6150.302 - 6175.508: 18.9268% ( 292) 00:07:38.355 6175.508 - 6200.714: 20.6775% ( 307) 00:07:38.355 6200.714 - 6225.920: 22.4624% ( 313) 00:07:38.355 6225.920 - 6251.126: 24.2758% ( 318) 00:07:38.355 6251.126 - 6276.332: 26.1234% ( 324) 00:07:38.355 6276.332 - 6301.538: 28.0052% ( 330) 00:07:38.355 6301.538 - 6326.745: 29.8871% ( 330) 00:07:38.355 6326.745 - 6351.951: 31.7005% ( 318) 00:07:38.355 6351.951 - 6377.157: 33.5823% ( 330) 00:07:38.355 6377.157 - 6402.363: 35.4870% ( 334) 00:07:38.355 6402.363 - 6427.569: 37.4601% ( 346) 00:07:38.355 6427.569 - 6452.775: 39.3932% ( 339) 00:07:38.355 6452.775 - 6503.188: 43.1683% ( 662) 00:07:38.355 6503.188 - 6553.600: 47.0119% ( 674) 00:07:38.355 6553.600 - 6604.012: 50.7698% ( 659) 00:07:38.355 6604.012 - 6654.425: 54.3967% ( 636) 00:07:38.355 6654.425 - 6704.837: 57.6015% ( 562) 00:07:38.355 6704.837 - 6755.249: 60.3216% ( 477) 00:07:38.355 6755.249 - 6805.662: 62.3917% ( 363) 00:07:38.355 6805.662 - 6856.074: 64.0625% ( 293) 00:07:38.355 6856.074 - 6906.486: 65.4083% ( 236) 00:07:38.355 6906.486 - 6956.898: 66.4918% ( 190) 00:07:38.355 6956.898 - 7007.311: 67.2787% ( 138) 00:07:38.355 7007.311 - 7057.723: 67.8547% ( 101) 00:07:38.355 7057.723 - 7108.135: 68.3109% ( 80) 00:07:38.355 7108.135 - 7158.548: 68.6816% ( 65) 00:07:38.355 7158.548 - 7208.960: 68.9781% ( 52) 00:07:38.355 7208.960 - 7259.372: 69.1834% ( 36) 
00:07:38.355 7259.372 - 7309.785: 69.3659% ( 32) 00:07:38.355 7309.785 - 7360.197: 69.4913% ( 22) 00:07:38.355 7360.197 - 7410.609: 69.6339% ( 25) 00:07:38.355 7410.609 - 7461.022: 69.7993% ( 29) 00:07:38.355 7461.022 - 7511.434: 69.9760% ( 31) 00:07:38.355 7511.434 - 7561.846: 70.2954% ( 56) 00:07:38.355 7561.846 - 7612.258: 70.7231% ( 75) 00:07:38.355 7612.258 - 7662.671: 71.1850% ( 81) 00:07:38.355 7662.671 - 7713.083: 71.6640% ( 84) 00:07:38.355 7713.083 - 7763.495: 72.1715% ( 89) 00:07:38.355 7763.495 - 7813.908: 72.7076% ( 94) 00:07:38.355 7813.908 - 7864.320: 73.3234% ( 108) 00:07:38.355 7864.320 - 7914.732: 73.9222% ( 105) 00:07:38.355 7914.732 - 7965.145: 74.6407% ( 126) 00:07:38.355 7965.145 - 8015.557: 75.4106% ( 135) 00:07:38.355 8015.557 - 8065.969: 76.1576% ( 131) 00:07:38.355 8065.969 - 8116.382: 76.9446% ( 138) 00:07:38.355 8116.382 - 8166.794: 77.7372% ( 139) 00:07:38.355 8166.794 - 8217.206: 78.5242% ( 138) 00:07:38.355 8217.206 - 8267.618: 79.4309% ( 159) 00:07:38.355 8267.618 - 8318.031: 80.2920% ( 151) 00:07:38.355 8318.031 - 8368.443: 81.1816% ( 156) 00:07:38.355 8368.443 - 8418.855: 82.1396% ( 168) 00:07:38.355 8418.855 - 8469.268: 83.0634% ( 162) 00:07:38.355 8469.268 - 8519.680: 84.0385% ( 171) 00:07:38.355 8519.680 - 8570.092: 84.9281% ( 156) 00:07:38.355 8570.092 - 8620.505: 85.8292% ( 158) 00:07:38.355 8620.505 - 8670.917: 86.7016% ( 153) 00:07:38.355 8670.917 - 8721.329: 87.5855% ( 155) 00:07:38.355 8721.329 - 8771.742: 88.3668% ( 137) 00:07:38.355 8771.742 - 8822.154: 89.0397% ( 118) 00:07:38.355 8822.154 - 8872.566: 89.6442% ( 106) 00:07:38.355 8872.566 - 8922.978: 90.2943% ( 114) 00:07:38.355 8922.978 - 8973.391: 90.9329% ( 112) 00:07:38.355 8973.391 - 9023.803: 91.4804% ( 96) 00:07:38.355 9023.803 - 9074.215: 91.9993% ( 91) 00:07:38.355 9074.215 - 9124.628: 92.4156% ( 73) 00:07:38.355 9124.628 - 9175.040: 92.7749% ( 63) 00:07:38.355 9175.040 - 9225.452: 93.0315% ( 45) 00:07:38.355 9225.452 - 9275.865: 93.2653% ( 41) 00:07:38.355 9275.865 - 9326.277: 93.4934% ( 40) 00:07:38.355 9326.277 - 9376.689: 93.7215% ( 40) 00:07:38.355 9376.689 - 9427.102: 93.9325% ( 37) 00:07:38.355 9427.102 - 9477.514: 94.1321% ( 35) 00:07:38.355 9477.514 - 9527.926: 94.3659% ( 41) 00:07:38.355 9527.926 - 9578.338: 94.5712% ( 36) 00:07:38.355 9578.338 - 9628.751: 94.7365% ( 29) 00:07:38.355 9628.751 - 9679.163: 94.9304% ( 34) 00:07:38.355 9679.163 - 9729.575: 95.0673% ( 24) 00:07:38.355 9729.575 - 9779.988: 95.2327% ( 29) 00:07:38.355 9779.988 - 9830.400: 95.3695% ( 24) 00:07:38.355 9830.400 - 9880.812: 95.5007% ( 23) 00:07:38.355 9880.812 - 9931.225: 95.6147% ( 20) 00:07:38.355 9931.225 - 9981.637: 95.7402% ( 22) 00:07:38.355 9981.637 - 10032.049: 95.8143% ( 13) 00:07:38.355 10032.049 - 10082.462: 95.8885% ( 13) 00:07:38.355 10082.462 - 10132.874: 95.9569% ( 12) 00:07:38.355 10132.874 - 10183.286: 96.0139% ( 10) 00:07:38.355 10183.286 - 10233.698: 96.0595% ( 8) 00:07:38.355 10233.698 - 10284.111: 96.1337% ( 13) 00:07:38.355 10284.111 - 10334.523: 96.1907% ( 10) 00:07:38.355 10334.523 - 10384.935: 96.2591% ( 12) 00:07:38.355 10384.935 - 10435.348: 96.3333% ( 13) 00:07:38.355 10435.348 - 10485.760: 96.3903% ( 10) 00:07:38.355 10485.760 - 10536.172: 96.4416% ( 9) 00:07:38.355 10536.172 - 10586.585: 96.5271% ( 15) 00:07:38.355 10586.585 - 10636.997: 96.6127% ( 15) 00:07:38.355 10636.997 - 10687.409: 96.6982% ( 15) 00:07:38.355 10687.409 - 10737.822: 96.7781% ( 14) 00:07:38.355 10737.822 - 10788.234: 96.8636% ( 15) 00:07:38.355 10788.234 - 10838.646: 96.9434% ( 14) 00:07:38.355 
10838.646 - 10889.058: 97.0119% ( 12) 00:07:38.355 10889.058 - 10939.471: 97.0746% ( 11) 00:07:38.355 10939.471 - 10989.883: 97.1430% ( 12) 00:07:38.355 10989.883 - 11040.295: 97.2400% ( 17) 00:07:38.355 11040.295 - 11090.708: 97.3198% ( 14) 00:07:38.355 11090.708 - 11141.120: 97.4167% ( 17) 00:07:38.355 11141.120 - 11191.532: 97.5080% ( 16) 00:07:38.355 11191.532 - 11241.945: 97.5992% ( 16) 00:07:38.355 11241.945 - 11292.357: 97.6848% ( 15) 00:07:38.355 11292.357 - 11342.769: 97.7703% ( 15) 00:07:38.355 11342.769 - 11393.182: 97.8558% ( 15) 00:07:38.355 11393.182 - 11443.594: 97.9129% ( 10) 00:07:38.355 11443.594 - 11494.006: 97.9699% ( 10) 00:07:38.355 11494.006 - 11544.418: 98.0269% ( 10) 00:07:38.355 11544.418 - 11594.831: 98.0839% ( 10) 00:07:38.355 11594.831 - 11645.243: 98.1410% ( 10) 00:07:38.355 11645.243 - 11695.655: 98.1866% ( 8) 00:07:38.355 11695.655 - 11746.068: 98.2436% ( 10) 00:07:38.355 11746.068 - 11796.480: 98.3063% ( 11) 00:07:38.355 11796.480 - 11846.892: 98.3520% ( 8) 00:07:38.355 11846.892 - 11897.305: 98.4033% ( 9) 00:07:38.355 11897.305 - 11947.717: 98.4375% ( 6) 00:07:38.355 11947.717 - 11998.129: 98.4660% ( 5) 00:07:38.355 11998.129 - 12048.542: 98.5002% ( 6) 00:07:38.355 12048.542 - 12098.954: 98.5516% ( 9) 00:07:38.355 12098.954 - 12149.366: 98.6029% ( 9) 00:07:38.355 12149.366 - 12199.778: 98.6599% ( 10) 00:07:38.355 12199.778 - 12250.191: 98.6998% ( 7) 00:07:38.355 12250.191 - 12300.603: 98.7340% ( 6) 00:07:38.355 12300.603 - 12351.015: 98.7682% ( 6) 00:07:38.356 12351.015 - 12401.428: 98.8082% ( 7) 00:07:38.356 12401.428 - 12451.840: 98.8367% ( 5) 00:07:38.356 12451.840 - 12502.252: 98.8709% ( 6) 00:07:38.356 12502.252 - 12552.665: 98.9051% ( 6) 00:07:38.356 12552.665 - 12603.077: 98.9393% ( 6) 00:07:38.356 12603.077 - 12653.489: 98.9735% ( 6) 00:07:38.356 12653.489 - 12703.902: 99.0078% ( 6) 00:07:38.356 12703.902 - 12754.314: 99.0363% ( 5) 00:07:38.356 12754.314 - 12804.726: 99.0705% ( 6) 00:07:38.356 12804.726 - 12855.138: 99.1047% ( 6) 00:07:38.356 12855.138 - 12905.551: 99.1446% ( 7) 00:07:38.356 12905.551 - 13006.375: 99.1788% ( 6) 00:07:38.356 13006.375 - 13107.200: 99.2016% ( 4) 00:07:38.356 13107.200 - 13208.025: 99.2302% ( 5) 00:07:38.356 13208.025 - 13308.849: 99.2530% ( 4) 00:07:38.356 13308.849 - 13409.674: 99.2701% ( 3) 00:07:38.356 26416.049 - 26617.698: 99.2758% ( 1) 00:07:38.356 26617.698 - 26819.348: 99.3043% ( 5) 00:07:38.356 26819.348 - 27020.997: 99.3385% ( 6) 00:07:38.356 27020.997 - 27222.646: 99.3727% ( 6) 00:07:38.356 27222.646 - 27424.295: 99.4012% ( 5) 00:07:38.356 27424.295 - 27625.945: 99.4297% ( 5) 00:07:38.356 27625.945 - 27827.594: 99.4640% ( 6) 00:07:38.356 27827.594 - 28029.243: 99.4982% ( 6) 00:07:38.356 28029.243 - 28230.892: 99.5267% ( 5) 00:07:38.356 28230.892 - 28432.542: 99.5609% ( 6) 00:07:38.356 28432.542 - 28634.191: 99.5951% ( 6) 00:07:38.356 28634.191 - 28835.840: 99.6236% ( 5) 00:07:38.356 28835.840 - 29037.489: 99.6350% ( 2) 00:07:38.356 32465.526 - 32667.175: 99.6635% ( 5) 00:07:38.356 32667.175 - 32868.825: 99.6978% ( 6) 00:07:38.356 32868.825 - 33070.474: 99.7263% ( 5) 00:07:38.356 33070.474 - 33272.123: 99.7605% ( 6) 00:07:38.356 33272.123 - 33473.772: 99.7947% ( 6) 00:07:38.356 33473.772 - 33675.422: 99.8232% ( 5) 00:07:38.356 33675.422 - 33877.071: 99.8574% ( 6) 00:07:38.356 33877.071 - 34078.720: 99.8917% ( 6) 00:07:38.356 34078.720 - 34280.369: 99.9259% ( 6) 00:07:38.356 34280.369 - 34482.018: 99.9601% ( 6) 00:07:38.356 34482.018 - 34683.668: 99.9886% ( 5) 00:07:38.356 34683.668 - 34885.317: 100.0000% ( 
2) 00:07:38.356 00:07:38.356 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:38.356 ============================================================================== 00:07:38.356 Range in us Cumulative IO count 00:07:38.356 5671.385 - 5696.591: 0.0114% ( 2) 00:07:38.356 5696.591 - 5721.797: 0.0912% ( 14) 00:07:38.356 5721.797 - 5747.003: 0.1825% ( 16) 00:07:38.356 5747.003 - 5772.209: 0.4106% ( 40) 00:07:38.356 5772.209 - 5797.415: 0.6729% ( 46) 00:07:38.356 5797.415 - 5822.622: 1.1348% ( 81) 00:07:38.356 5822.622 - 5847.828: 1.5967% ( 81) 00:07:38.356 5847.828 - 5873.034: 2.2753% ( 119) 00:07:38.356 5873.034 - 5898.240: 3.1877% ( 160) 00:07:38.356 5898.240 - 5923.446: 4.2427% ( 185) 00:07:38.356 5923.446 - 5948.652: 5.3946% ( 202) 00:07:38.356 5948.652 - 5973.858: 6.6948% ( 228) 00:07:38.356 5973.858 - 5999.065: 8.0520% ( 238) 00:07:38.356 5999.065 - 6024.271: 9.6031% ( 272) 00:07:38.356 6024.271 - 6049.477: 11.1770% ( 276) 00:07:38.356 6049.477 - 6074.683: 12.7338% ( 273) 00:07:38.356 6074.683 - 6099.889: 14.4332% ( 298) 00:07:38.356 6099.889 - 6125.095: 16.1211% ( 296) 00:07:38.356 6125.095 - 6150.302: 17.9117% ( 314) 00:07:38.356 6150.302 - 6175.508: 19.6738% ( 309) 00:07:38.356 6175.508 - 6200.714: 21.4530% ( 312) 00:07:38.356 6200.714 - 6225.920: 23.2208% ( 310) 00:07:38.356 6225.920 - 6251.126: 24.9943% ( 311) 00:07:38.356 6251.126 - 6276.332: 26.7735% ( 312) 00:07:38.356 6276.332 - 6301.538: 28.6097% ( 322) 00:07:38.356 6301.538 - 6326.745: 30.3889% ( 312) 00:07:38.356 6326.745 - 6351.951: 32.2023% ( 318) 00:07:38.356 6351.951 - 6377.157: 33.9986% ( 315) 00:07:38.356 6377.157 - 6402.363: 35.8234% ( 320) 00:07:38.356 6402.363 - 6427.569: 37.6255% ( 316) 00:07:38.356 6427.569 - 6452.775: 39.4788% ( 325) 00:07:38.356 6452.775 - 6503.188: 43.1569% ( 645) 00:07:38.356 6503.188 - 6553.600: 46.8408% ( 646) 00:07:38.356 6553.600 - 6604.012: 50.4220% ( 628) 00:07:38.356 6604.012 - 6654.425: 53.8093% ( 594) 00:07:38.356 6654.425 - 6704.837: 56.9058% ( 543) 00:07:38.356 6704.837 - 6755.249: 59.5119% ( 457) 00:07:38.356 6755.249 - 6805.662: 61.5876% ( 364) 00:07:38.356 6805.662 - 6856.074: 63.0531% ( 257) 00:07:38.356 6856.074 - 6906.486: 64.2165% ( 204) 00:07:38.356 6906.486 - 6956.898: 65.2657% ( 184) 00:07:38.356 6956.898 - 7007.311: 66.0983% ( 146) 00:07:38.356 7007.311 - 7057.723: 66.6458% ( 96) 00:07:38.356 7057.723 - 7108.135: 67.0906% ( 78) 00:07:38.356 7108.135 - 7158.548: 67.4384% ( 61) 00:07:38.356 7158.548 - 7208.960: 67.7578% ( 56) 00:07:38.356 7208.960 - 7259.372: 68.0600% ( 53) 00:07:38.356 7259.372 - 7309.785: 68.3451% ( 50) 00:07:38.356 7309.785 - 7360.197: 68.6474% ( 53) 00:07:38.356 7360.197 - 7410.609: 68.9211% ( 48) 00:07:38.356 7410.609 - 7461.022: 69.1948% ( 48) 00:07:38.356 7461.022 - 7511.434: 69.4856% ( 51) 00:07:38.356 7511.434 - 7561.846: 69.9418% ( 80) 00:07:38.356 7561.846 - 7612.258: 70.4494% ( 89) 00:07:38.356 7612.258 - 7662.671: 71.0310% ( 102) 00:07:38.356 7662.671 - 7713.083: 71.6355% ( 106) 00:07:38.356 7713.083 - 7763.495: 72.2970% ( 116) 00:07:38.356 7763.495 - 7813.908: 72.9813% ( 120) 00:07:38.356 7813.908 - 7864.320: 73.6599% ( 119) 00:07:38.356 7864.320 - 7914.732: 74.4126% ( 132) 00:07:38.356 7914.732 - 7965.145: 75.2167% ( 141) 00:07:38.356 7965.145 - 8015.557: 76.0892% ( 153) 00:07:38.356 8015.557 - 8065.969: 77.0016% ( 160) 00:07:38.356 8065.969 - 8116.382: 77.8855% ( 155) 00:07:38.356 8116.382 - 8166.794: 78.7751% ( 156) 00:07:38.356 8166.794 - 8217.206: 79.6875% ( 160) 00:07:38.356 8217.206 - 8267.618: 80.5657% ( 154) 
00:07:38.356 8267.618 - 8318.031: 81.4496% ( 155) 00:07:38.356 8318.031 - 8368.443: 82.3164% ( 152) 00:07:38.356 8368.443 - 8418.855: 83.1547% ( 147) 00:07:38.356 8418.855 - 8469.268: 83.9986% ( 148) 00:07:38.356 8469.268 - 8519.680: 84.8084% ( 142) 00:07:38.356 8519.680 - 8570.092: 85.5896% ( 137) 00:07:38.356 8570.092 - 8620.505: 86.3424% ( 132) 00:07:38.356 8620.505 - 8670.917: 87.0837% ( 130) 00:07:38.356 8670.917 - 8721.329: 87.7053% ( 109) 00:07:38.356 8721.329 - 8771.742: 88.2984% ( 104) 00:07:38.356 8771.742 - 8822.154: 88.8116% ( 90) 00:07:38.356 8822.154 - 8872.566: 89.2963% ( 85) 00:07:38.356 8872.566 - 8922.978: 89.7867% ( 86) 00:07:38.356 8922.978 - 8973.391: 90.3285% ( 95) 00:07:38.356 8973.391 - 9023.803: 90.8189% ( 86) 00:07:38.356 9023.803 - 9074.215: 91.3093% ( 86) 00:07:38.356 9074.215 - 9124.628: 91.7256% ( 73) 00:07:38.356 9124.628 - 9175.040: 92.0734% ( 61) 00:07:38.356 9175.040 - 9225.452: 92.3472% ( 48) 00:07:38.356 9225.452 - 9275.865: 92.6095% ( 46) 00:07:38.356 9275.865 - 9326.277: 92.8718% ( 46) 00:07:38.356 9326.277 - 9376.689: 93.1341% ( 46) 00:07:38.356 9376.689 - 9427.102: 93.3964% ( 46) 00:07:38.356 9427.102 - 9477.514: 93.6131% ( 38) 00:07:38.356 9477.514 - 9527.926: 93.7899% ( 31) 00:07:38.356 9527.926 - 9578.338: 93.9781% ( 33) 00:07:38.356 9578.338 - 9628.751: 94.1435% ( 29) 00:07:38.356 9628.751 - 9679.163: 94.2803% ( 24) 00:07:38.356 9679.163 - 9729.575: 94.4400% ( 28) 00:07:38.356 9729.575 - 9779.988: 94.5883% ( 26) 00:07:38.356 9779.988 - 9830.400: 94.7536% ( 29) 00:07:38.356 9830.400 - 9880.812: 94.9304% ( 31) 00:07:38.356 9880.812 - 9931.225: 95.0730% ( 25) 00:07:38.356 9931.225 - 9981.637: 95.2270% ( 27) 00:07:38.356 9981.637 - 10032.049: 95.3809% ( 27) 00:07:38.356 10032.049 - 10082.462: 95.5748% ( 34) 00:07:38.356 10082.462 - 10132.874: 95.7459% ( 30) 00:07:38.356 10132.874 - 10183.286: 95.8885% ( 25) 00:07:38.356 10183.286 - 10233.698: 96.0367% ( 26) 00:07:38.356 10233.698 - 10284.111: 96.1850% ( 26) 00:07:38.356 10284.111 - 10334.523: 96.2990% ( 20) 00:07:38.356 10334.523 - 10384.935: 96.4074% ( 19) 00:07:38.356 10384.935 - 10435.348: 96.5443% ( 24) 00:07:38.356 10435.348 - 10485.760: 96.6640% ( 21) 00:07:38.356 10485.760 - 10536.172: 96.7838% ( 21) 00:07:38.356 10536.172 - 10586.585: 96.8921% ( 19) 00:07:38.356 10586.585 - 10636.997: 97.0119% ( 21) 00:07:38.356 10636.997 - 10687.409: 97.1259% ( 20) 00:07:38.356 10687.409 - 10737.822: 97.2229% ( 17) 00:07:38.356 10737.822 - 10788.234: 97.3255% ( 18) 00:07:38.356 10788.234 - 10838.646: 97.4281% ( 18) 00:07:38.356 10838.646 - 10889.058: 97.5251% ( 17) 00:07:38.356 10889.058 - 10939.471: 97.5821% ( 10) 00:07:38.356 10939.471 - 10989.883: 97.6334% ( 9) 00:07:38.356 10989.883 - 11040.295: 97.6848% ( 9) 00:07:38.356 11040.295 - 11090.708: 97.7475% ( 11) 00:07:38.356 11090.708 - 11141.120: 97.7931% ( 8) 00:07:38.356 11141.120 - 11191.532: 97.8501% ( 10) 00:07:38.356 11191.532 - 11241.945: 97.8901% ( 7) 00:07:38.356 11241.945 - 11292.357: 97.9471% ( 10) 00:07:38.356 11292.357 - 11342.769: 98.0041% ( 10) 00:07:38.356 11342.769 - 11393.182: 98.0611% ( 10) 00:07:38.356 11393.182 - 11443.594: 98.1125% ( 9) 00:07:38.356 11443.594 - 11494.006: 98.1809% ( 12) 00:07:38.356 11494.006 - 11544.418: 98.2493% ( 12) 00:07:38.356 11544.418 - 11594.831: 98.3234% ( 13) 00:07:38.356 11594.831 - 11645.243: 98.3976% ( 13) 00:07:38.356 11645.243 - 11695.655: 98.4603% ( 11) 00:07:38.356 11695.655 - 11746.068: 98.5287% ( 12) 00:07:38.356 11746.068 - 11796.480: 98.6029% ( 13) 00:07:38.356 11796.480 - 11846.892: 98.6656% ( 
11) 00:07:38.356 11846.892 - 11897.305: 98.7226% ( 10) 00:07:38.356 11897.305 - 11947.717: 98.7911% ( 12) 00:07:38.356 11947.717 - 11998.129: 98.8481% ( 10) 00:07:38.356 11998.129 - 12048.542: 98.8823% ( 6) 00:07:38.356 12048.542 - 12098.954: 98.9165% ( 6) 00:07:38.356 12098.954 - 12149.366: 98.9564% ( 7) 00:07:38.356 12149.366 - 12199.778: 98.9906% ( 6) 00:07:38.356 12199.778 - 12250.191: 99.0306% ( 7) 00:07:38.356 12250.191 - 12300.603: 99.0591% ( 5) 00:07:38.356 12300.603 - 12351.015: 99.0990% ( 7) 00:07:38.356 12351.015 - 12401.428: 99.1332% ( 6) 00:07:38.356 12401.428 - 12451.840: 99.1560% ( 4) 00:07:38.356 12451.840 - 12502.252: 99.1674% ( 2) 00:07:38.356 12502.252 - 12552.665: 99.1788% ( 2) 00:07:38.356 12552.665 - 12603.077: 99.1902% ( 2) 00:07:38.356 12603.077 - 12653.489: 99.2073% ( 3) 00:07:38.356 12653.489 - 12703.902: 99.2188% ( 2) 00:07:38.356 12703.902 - 12754.314: 99.2302% ( 2) 00:07:38.356 12754.314 - 12804.726: 99.2416% ( 2) 00:07:38.356 12804.726 - 12855.138: 99.2587% ( 3) 00:07:38.356 12855.138 - 12905.551: 99.2701% ( 2) 00:07:38.356 23996.258 - 24097.083: 99.2758% ( 1) 00:07:38.356 24097.083 - 24197.908: 99.2986% ( 4) 00:07:38.356 24197.908 - 24298.732: 99.3100% ( 2) 00:07:38.356 24298.732 - 24399.557: 99.3271% ( 3) 00:07:38.356 24399.557 - 24500.382: 99.3442% ( 3) 00:07:38.356 24500.382 - 24601.206: 99.3556% ( 2) 00:07:38.356 24601.206 - 24702.031: 99.3727% ( 3) 00:07:38.356 24702.031 - 24802.855: 99.3898% ( 3) 00:07:38.356 24802.855 - 24903.680: 99.4069% ( 3) 00:07:38.356 24903.680 - 25004.505: 99.4240% ( 3) 00:07:38.356 25004.505 - 25105.329: 99.4411% ( 3) 00:07:38.356 25105.329 - 25206.154: 99.4583% ( 3) 00:07:38.356 25206.154 - 25306.978: 99.4697% ( 2) 00:07:38.356 25306.978 - 25407.803: 99.4868% ( 3) 00:07:38.356 25407.803 - 25508.628: 99.5039% ( 3) 00:07:38.356 25508.628 - 25609.452: 99.5210% ( 3) 00:07:38.356 25609.452 - 25710.277: 99.5324% ( 2) 00:07:38.356 25710.277 - 25811.102: 99.5495% ( 3) 00:07:38.356 25811.102 - 26012.751: 99.5837% ( 6) 00:07:38.356 26012.751 - 26214.400: 99.6179% ( 6) 00:07:38.356 26214.400 - 26416.049: 99.6350% ( 3) 00:07:38.356 29844.086 - 30045.735: 99.6635% ( 5) 00:07:38.356 30045.735 - 30247.385: 99.6978% ( 6) 00:07:38.356 30247.385 - 30449.034: 99.7263% ( 5) 00:07:38.356 30449.034 - 30650.683: 99.7605% ( 6) 00:07:38.356 30650.683 - 30852.332: 99.7890% ( 5) 00:07:38.356 30852.332 - 31053.982: 99.8232% ( 6) 00:07:38.357 31053.982 - 31255.631: 99.8574% ( 6) 00:07:38.357 31255.631 - 31457.280: 99.8917% ( 6) 00:07:38.357 31457.280 - 31658.929: 99.9202% ( 5) 00:07:38.357 31658.929 - 31860.578: 99.9544% ( 6) 00:07:38.357 31860.578 - 32062.228: 99.9886% ( 6) 00:07:38.357 32062.228 - 32263.877: 100.0000% ( 2) 00:07:38.357 00:07:38.357 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:38.357 ============================================================================== 00:07:38.357 Range in us Cumulative IO count 00:07:38.357 5696.591 - 5721.797: 0.0684% ( 12) 00:07:38.357 5721.797 - 5747.003: 0.1540% ( 15) 00:07:38.357 5747.003 - 5772.209: 0.3479% ( 34) 00:07:38.357 5772.209 - 5797.415: 0.6159% ( 47) 00:07:38.357 5797.415 - 5822.622: 1.0322% ( 73) 00:07:38.357 5822.622 - 5847.828: 1.5283% ( 87) 00:07:38.357 5847.828 - 5873.034: 2.0757% ( 96) 00:07:38.357 5873.034 - 5898.240: 2.9824% ( 159) 00:07:38.357 5898.240 - 5923.446: 3.9861% ( 176) 00:07:38.357 5923.446 - 5948.652: 5.1437% ( 203) 00:07:38.357 5948.652 - 5973.858: 6.3926% ( 219) 00:07:38.357 5973.858 - 5999.065: 7.7840% ( 244) 00:07:38.357 5999.065 - 6024.271: 
9.2381% ( 255) 00:07:38.357 6024.271 - 6049.477: 10.7721% ( 269) 00:07:38.357 6049.477 - 6074.683: 12.3574% ( 278) 00:07:38.357 6074.683 - 6099.889: 14.0169% ( 291) 00:07:38.357 6099.889 - 6125.095: 15.6250% ( 282) 00:07:38.357 6125.095 - 6150.302: 17.3244% ( 298) 00:07:38.357 6150.302 - 6175.508: 19.0009% ( 294) 00:07:38.357 6175.508 - 6200.714: 20.7516% ( 307) 00:07:38.357 6200.714 - 6225.920: 22.6106% ( 326) 00:07:38.357 6225.920 - 6251.126: 24.3727% ( 309) 00:07:38.357 6251.126 - 6276.332: 26.2146% ( 323) 00:07:38.357 6276.332 - 6301.538: 28.0338% ( 319) 00:07:38.357 6301.538 - 6326.745: 29.9213% ( 331) 00:07:38.357 6326.745 - 6351.951: 31.7632% ( 323) 00:07:38.357 6351.951 - 6377.157: 33.6850% ( 337) 00:07:38.357 6377.157 - 6402.363: 35.5782% ( 332) 00:07:38.357 6402.363 - 6427.569: 37.4829% ( 334) 00:07:38.357 6427.569 - 6452.775: 39.3647% ( 330) 00:07:38.357 6452.775 - 6503.188: 43.1398% ( 662) 00:07:38.357 6503.188 - 6553.600: 46.9776% ( 673) 00:07:38.357 6553.600 - 6604.012: 50.6558% ( 645) 00:07:38.357 6604.012 - 6654.425: 54.1629% ( 615) 00:07:38.357 6654.425 - 6704.837: 57.3392% ( 557) 00:07:38.357 6704.837 - 6755.249: 59.8255% ( 436) 00:07:38.357 6755.249 - 6805.662: 61.7587% ( 339) 00:07:38.357 6805.662 - 6856.074: 63.1387% ( 242) 00:07:38.357 6856.074 - 6906.486: 64.2108% ( 188) 00:07:38.357 6906.486 - 6956.898: 65.1175% ( 159) 00:07:38.357 6956.898 - 7007.311: 65.7904% ( 118) 00:07:38.357 7007.311 - 7057.723: 66.2922% ( 88) 00:07:38.357 7057.723 - 7108.135: 66.6686% ( 66) 00:07:38.357 7108.135 - 7158.548: 66.9708% ( 53) 00:07:38.357 7158.548 - 7208.960: 67.2046% ( 41) 00:07:38.357 7208.960 - 7259.372: 67.4327% ( 40) 00:07:38.357 7259.372 - 7309.785: 67.6551% ( 39) 00:07:38.357 7309.785 - 7360.197: 67.9802% ( 57) 00:07:38.357 7360.197 - 7410.609: 68.2539% ( 48) 00:07:38.357 7410.609 - 7461.022: 68.5447% ( 51) 00:07:38.357 7461.022 - 7511.434: 68.9325% ( 68) 00:07:38.357 7511.434 - 7561.846: 69.4457% ( 90) 00:07:38.357 7561.846 - 7612.258: 70.0730% ( 110) 00:07:38.357 7612.258 - 7662.671: 70.7801% ( 124) 00:07:38.357 7662.671 - 7713.083: 71.4986% ( 126) 00:07:38.357 7713.083 - 7763.495: 72.1943% ( 122) 00:07:38.357 7763.495 - 7813.908: 72.9186% ( 127) 00:07:38.357 7813.908 - 7864.320: 73.6827% ( 134) 00:07:38.357 7864.320 - 7914.732: 74.5495% ( 152) 00:07:38.357 7914.732 - 7965.145: 75.4505% ( 158) 00:07:38.357 7965.145 - 8015.557: 76.4313% ( 172) 00:07:38.357 8015.557 - 8065.969: 77.3723% ( 165) 00:07:38.357 8065.969 - 8116.382: 78.3303% ( 168) 00:07:38.357 8116.382 - 8166.794: 79.2598% ( 163) 00:07:38.357 8166.794 - 8217.206: 80.2007% ( 165) 00:07:38.357 8217.206 - 8267.618: 81.0789% ( 154) 00:07:38.357 8267.618 - 8318.031: 81.9343% ( 150) 00:07:38.357 8318.031 - 8368.443: 82.7954% ( 151) 00:07:38.357 8368.443 - 8418.855: 83.5823% ( 138) 00:07:38.357 8418.855 - 8469.268: 84.4035% ( 144) 00:07:38.357 8469.268 - 8519.680: 85.1962% ( 139) 00:07:38.357 8519.680 - 8570.092: 85.9318% ( 129) 00:07:38.357 8570.092 - 8620.505: 86.6731% ( 130) 00:07:38.357 8620.505 - 8670.917: 87.3859% ( 125) 00:07:38.357 8670.917 - 8721.329: 88.0132% ( 110) 00:07:38.357 8721.329 - 8771.742: 88.5664% ( 97) 00:07:38.357 8771.742 - 8822.154: 89.0454% ( 84) 00:07:38.357 8822.154 - 8872.566: 89.5130% ( 82) 00:07:38.357 8872.566 - 8922.978: 89.9806% ( 82) 00:07:38.357 8922.978 - 8973.391: 90.4254% ( 78) 00:07:38.357 8973.391 - 9023.803: 90.8930% ( 82) 00:07:38.357 9023.803 - 9074.215: 91.2523% ( 63) 00:07:38.357 9074.215 - 9124.628: 91.5716% ( 56) 00:07:38.357 9124.628 - 9175.040: 91.8225% ( 44) 
00:07:38.357 [latency histogram buckets continued: 9175.040us through 29440.788us, cumulative 91.9822% to 100.0000%]
00:07:38.358 
00:07:38.358 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:38.358 ==============================================================================
00:07:38.358        Range in us     Cumulative    IO count
00:07:38.358 [histogram buckets: 5671.385us through 26416.049us, cumulative 0.0057% to 100.0000%]
00:07:38.358 
00:07:38.358 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:38.358 ==============================================================================
00:07:38.358        Range in us     Cumulative    IO count
00:07:38.359 [histogram buckets: 5671.385us through 23592.960us, cumulative 0.0057% to 100.0000%]
00:07:38.359 
00:07:38.359 12:05:39 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:39.735 Initializing NVMe Controllers
00:07:39.735 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:39.735 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:39.735 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:39.735 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:39.735 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:39.735 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:39.735 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:39.735 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:39.735 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:39.735 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:39.735 Initialization complete. Launching workers.
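For readers unfamiliar with the tool, the invocation above breaks down as follows. This is a sketch based on common spdk_nvme_perf options; the authoritative reference is the tool's own help output for the SPDK revision under test, and exact flag semantics may differ slightly between versions.

    # Annotated re-run of the same command; binary path taken from the log above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 \
        -w write \
        -o 12288 \
        -t 1 \
        -LL \
        -i 0
    # -q 128   : queue depth, 128 I/Os kept in flight per namespace
    # -w write : sequential-write I/O pattern
    # -o 12288 : I/O size in bytes (12 KiB per operation)
    # -t 1     : run time in seconds
    # -LL      : enable latency tracking; the doubled L requests the detailed
    #            per-bucket histograms seen in this log, not just the summary
    # -i 0     : shared memory group ID, letting the run coexist with other SPDK apps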
00:07:39.735 ========================================================
00:07:39.735                                                                     Latency(us)
00:07:39.735 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:39.735 PCIE (0000:00:10.0) NSID 1 from core  0:   15159.12     177.65    8450.70    6314.29   31780.63
00:07:39.735 PCIE (0000:00:11.0) NSID 1 from core  0:   15159.12     177.65    8436.02    6331.83   29868.72
00:07:39.735 PCIE (0000:00:13.0) NSID 1 from core  0:   15159.12     177.65    8417.40    6374.58   29714.59
00:07:39.735 PCIE (0000:00:12.0) NSID 1 from core  0:   15159.12     177.65    8400.35    6409.39   27522.34
00:07:39.735 PCIE (0000:00:12.0) NSID 2 from core  0:   15159.12     177.65    8386.04    6283.35   26103.90
00:07:39.735 PCIE (0000:00:12.0) NSID 3 from core  0:   15159.12     177.65    8371.92    6339.84   24543.51
00:07:39.735 ========================================================
00:07:39.735 Total                                  :   90954.70    1065.88    8410.40    6283.35   31780.63
00:07:39.735 
00:07:39.735 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:39.735 =================================================================================
00:07:39.735   1.00000% :  6553.600us
00:07:39.735  10.00000% :  7007.311us
00:07:39.735  25.00000% :  7410.609us
00:07:39.735  50.00000% :  8116.382us
00:07:39.735  75.00000% :  8973.391us
00:07:39.735  90.00000% : 10082.462us
00:07:39.735  95.00000% : 10737.822us
00:07:39.735  98.00000% : 11846.892us
00:07:39.735  99.00000% : 12754.314us
00:07:39.735  99.50000% : 25206.154us
00:07:39.735  99.90000% : 31457.280us
00:07:39.735  99.99000% : 31860.578us
00:07:39.735  99.99900% : 31860.578us
00:07:39.735  99.99990% : 31860.578us
00:07:39.735  99.99999% : 31860.578us
00:07:39.735 
00:07:39.735 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:39.735 =================================================================================
00:07:39.735   1.00000% :  6755.249us
00:07:39.735  10.00000% :  7057.723us
00:07:39.735  25.00000% :  7410.609us
00:07:39.735  50.00000% :  8065.969us
00:07:39.735  75.00000% :  8973.391us
00:07:39.735  90.00000% : 10082.462us
00:07:39.735  95.00000% : 10586.585us
00:07:39.735  98.00000% : 11594.831us
00:07:39.735  99.00000% : 12603.077us
00:07:39.735  99.50000% : 24399.557us
00:07:39.735  99.90000% : 29642.437us
00:07:39.735  99.99000% : 29844.086us
00:07:39.735  99.99900% : 30045.735us
00:07:39.735  99.99990% : 30045.735us
00:07:39.735  99.99999% : 30045.735us
00:07:39.735 
00:07:39.735 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:39.735 =================================================================================
00:07:39.735   1.00000% :  6704.837us
00:07:39.735  10.00000% :  7057.723us
00:07:39.735  25.00000% :  7410.609us
00:07:39.735  50.00000% :  8015.557us
00:07:39.735  75.00000% :  8973.391us
00:07:39.735  90.00000% : 10082.462us
00:07:39.735  95.00000% : 10485.760us
00:07:39.735  98.00000% : 11897.305us
00:07:39.735  99.00000% : 12703.902us
00:07:39.735  99.50000% : 23592.960us
00:07:39.735  99.90000% : 29440.788us
00:07:39.735  99.99000% : 29844.086us
00:07:39.735  99.99900% : 29844.086us
00:07:39.735  99.99990% : 29844.086us
00:07:39.735  99.99999% : 29844.086us
00:07:39.735 
00:07:39.735 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:39.735 =================================================================================
00:07:39.735   1.00000% :  6704.837us
00:07:39.735  10.00000% :  7057.723us
00:07:39.735  25.00000% :  7410.609us
00:07:39.735  50.00000% :  8065.969us
00:07:39.735  75.00000% :  8973.391us
00:07:39.735  90.00000% : 10032.049us
00:07:39.735  95.00000% : 10485.760us
00:07:39.735  98.00000% : 11897.305us
00:07:39.735  99.00000% : 12855.138us
00:07:39.735  99.50000% : 20669.046us
00:07:39.735  99.90000% : 27222.646us
00:07:39.735  99.99000% : 27625.945us
00:07:39.735  99.99900% : 27625.945us
00:07:39.735  99.99990% : 27625.945us
00:07:39.735  99.99999% : 27625.945us
00:07:39.735 
00:07:39.735 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:39.735 =================================================================================
00:07:39.735   1.00000% :  6654.425us
00:07:39.735  10.00000% :  7057.723us
00:07:39.735  25.00000% :  7410.609us
00:07:39.735  50.00000% :  8065.969us
00:07:39.735  75.00000% :  8973.391us
00:07:39.735  90.00000% : 10082.462us
00:07:39.735  95.00000% : 10536.172us
00:07:39.735  98.00000% : 11796.480us
00:07:39.735  99.00000% : 12703.902us
00:07:39.735  99.50000% : 19257.502us
00:07:39.735  99.90000% : 25710.277us
00:07:39.735  99.99000% : 26214.400us
00:07:39.735  99.99900% : 26214.400us
00:07:39.735  99.99990% : 26214.400us
00:07:39.735  99.99999% : 26214.400us
00:07:39.735 
00:07:39.735 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:39.735 =================================================================================
00:07:39.735   1.00000% :  6654.425us
00:07:39.735  10.00000% :  7057.723us
00:07:39.735  25.00000% :  7410.609us
00:07:39.735  50.00000% :  8065.969us
00:07:39.735  75.00000% :  8922.978us
00:07:39.735  90.00000% : 10032.049us
00:07:39.735  95.00000% : 10586.585us
00:07:39.735  98.00000% : 11998.129us
00:07:39.735  99.00000% : 12502.252us
00:07:39.735  99.50000% : 17341.834us
00:07:39.735  99.90000% : 24197.908us
00:07:39.735  99.99000% : 24601.206us
00:07:39.735  99.99900% : 24601.206us
00:07:39.735  99.99990% : 24601.206us
00:07:39.735  99.99999% : 24601.206us
00:07:39.735 
00:07:39.736 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:39.736 ==============================================================================
00:07:39.736        Range in us     Cumulative    IO count
00:07:39.736 [histogram buckets: 6301.538us through 31860.578us, cumulative 0.0066% to 100.0000%]
00:07:39.736 
00:07:39.736 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:39.736 ==============================================================================
00:07:39.736        Range in us     Cumulative    IO count
00:07:39.737 [histogram buckets: 6326.745us through 30045.735us, cumulative 0.0132% to 100.0000%]
00:07:39.737 
00:07:39.737 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:39.737 ==============================================================================
00:07:39.737        Range in us     Cumulative    IO count
00:07:39.738 [histogram buckets: 6351.951us through 29844.086us, cumulative 0.0066% to 100.0000%]
00:07:39.738 
00:07:39.738 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:39.738 ==============================================================================
00:07:39.738        Range in us     Cumulative    IO count
00:07:39.738 [histogram buckets: 6402.363us through 27625.945us, cumulative 0.0066% to 100.0000%]
00:07:39.738 
00:07:39.738 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:39.739 ==============================================================================
00:07:39.739        Range in us     Cumulative    IO count
00:07:39.739   6276.332 -  6301.538:  0.0066% (     1)
00:07:39.739   6351.951 -  6377.157:  0.0198% (     2)
00:07:39.739   6377.157 -  6402.363:  0.0264% (     1)
00:07:39.739   6402.363 -  6427.569:  0.0461% (     3)
00:07:39.739   6427.569 -  6452.775:  0.0659% (     3)
00:07:39.739   6452.775 -  6503.188:  0.1319% (    10)
00:07:39.739   6503.188 -  6553.600:  0.2967% (    25)
00:07:39.739   6553.600 -  6604.012:  0.6857% (    59)
00:07:39.739   6604.012 -  6654.425:  1.1472% (    70)
00:07:39.739   6654.425 -  6704.837:  1.6548% (    77)
00:07:39.739   6704.837 -  6755.249:  2.1888% (    81)
00:07:39.739   6755.249 -  6805.662:  3.0723% (   134)
00:07:39.739   6805.662 -  6856.074:  4.1864% (   169)
00:07:39.739   6856.074 -  6906.486:  5.5446% (   206)
00:07:39.739   6906.486 -  6956.898:  7.0280% (   225)
00:07:39.739   6956.898 -  7007.311:  8.8674% (   279)
00:07:39.739 7007.311 - 7057.723: 10.5881% ( 261) 00:07:39.739 7057.723 - 7108.135: 12.9483% ( 358) 00:07:39.739 7108.135 - 7158.548: 15.1042% ( 327) 00:07:39.739 7158.548 - 7208.960: 17.3391% ( 339) 00:07:39.739 7208.960 - 7259.372: 19.5939% ( 342) 00:07:39.739 7259.372 - 7309.785: 21.4333% ( 279) 00:07:39.739 7309.785 - 7360.197: 23.2991% ( 283) 00:07:39.739 7360.197 - 7410.609: 25.4351% ( 324) 00:07:39.739 7410.609 - 7461.022: 27.8085% ( 360) 00:07:39.739 7461.022 - 7511.434: 29.9248% ( 321) 00:07:39.739 7511.434 - 7561.846: 32.1268% ( 334) 00:07:39.739 7561.846 - 7612.258: 34.3091% ( 331) 00:07:39.739 7612.258 - 7662.671: 36.5111% ( 334) 00:07:39.739 7662.671 - 7713.083: 38.2977% ( 271) 00:07:39.739 7713.083 - 7763.495: 40.4734% ( 330) 00:07:39.739 7763.495 - 7813.908: 42.3325% ( 282) 00:07:39.739 7813.908 - 7864.320: 44.0335% ( 258) 00:07:39.739 7864.320 - 7914.732: 45.7542% ( 261) 00:07:39.739 7914.732 - 7965.145: 47.3629% ( 244) 00:07:39.739 7965.145 - 8015.557: 49.2484% ( 286) 00:07:39.739 8015.557 - 8065.969: 51.1340% ( 286) 00:07:39.739 8065.969 - 8116.382: 52.8547% ( 261) 00:07:39.739 8116.382 - 8166.794: 54.3381% ( 225) 00:07:39.739 8166.794 - 8217.206: 55.6105% ( 193) 00:07:39.739 8217.206 - 8267.618: 56.7972% ( 180) 00:07:39.739 8267.618 - 8318.031: 58.1092% ( 199) 00:07:39.739 8318.031 - 8368.443: 59.3025% ( 181) 00:07:39.739 8368.443 - 8418.855: 60.6870% ( 210) 00:07:39.739 8418.855 - 8469.268: 62.0517% ( 207) 00:07:39.739 8469.268 - 8519.680: 63.5153% ( 222) 00:07:39.739 8519.680 - 8570.092: 64.9196% ( 213) 00:07:39.739 8570.092 - 8620.505: 66.2711% ( 205) 00:07:39.739 8620.505 - 8670.917: 67.8797% ( 244) 00:07:39.739 8670.917 - 8721.329: 69.4818% ( 243) 00:07:39.739 8721.329 - 8771.742: 70.7476% ( 192) 00:07:39.739 8771.742 - 8822.154: 71.9607% ( 184) 00:07:39.739 8822.154 - 8872.566: 73.3716% ( 214) 00:07:39.739 8872.566 - 8922.978: 74.9143% ( 234) 00:07:39.739 8922.978 - 8973.391: 75.9955% ( 164) 00:07:39.739 8973.391 - 9023.803: 77.1427% ( 174) 00:07:39.739 9023.803 - 9074.215: 78.1777% ( 157) 00:07:39.739 9074.215 - 9124.628: 79.4831% ( 198) 00:07:39.739 9124.628 - 9175.040: 80.6830% ( 182) 00:07:39.739 9175.040 - 9225.452: 81.5467% ( 131) 00:07:39.739 9225.452 - 9275.865: 82.3510% ( 122) 00:07:39.739 9275.865 - 9326.277: 83.0696% ( 109) 00:07:39.739 9326.277 - 9376.689: 83.8080% ( 112) 00:07:39.739 9376.689 - 9427.102: 84.4607% ( 99) 00:07:39.739 9427.102 - 9477.514: 84.9024% ( 67) 00:07:39.739 9477.514 - 9527.926: 85.3903% ( 74) 00:07:39.739 9527.926 - 9578.338: 85.8518% ( 70) 00:07:39.739 9578.338 - 9628.751: 86.3067% ( 69) 00:07:39.739 9628.751 - 9679.163: 86.9066% ( 91) 00:07:39.739 9679.163 - 9729.575: 87.5000% ( 90) 00:07:39.739 9729.575 - 9779.988: 87.9417% ( 67) 00:07:39.739 9779.988 - 9830.400: 88.3966% ( 69) 00:07:39.739 9830.400 - 9880.812: 88.7526% ( 54) 00:07:39.739 9880.812 - 9931.225: 89.1021% ( 53) 00:07:39.739 9931.225 - 9981.637: 89.4251% ( 49) 00:07:39.739 9981.637 - 10032.049: 89.7877% ( 55) 00:07:39.739 10032.049 - 10082.462: 90.2426% ( 69) 00:07:39.739 10082.462 - 10132.874: 90.7832% ( 82) 00:07:39.739 10132.874 - 10183.286: 91.3898% ( 92) 00:07:39.739 10183.286 - 10233.698: 92.1809% ( 120) 00:07:39.739 10233.698 - 10284.111: 92.7874% ( 92) 00:07:39.739 10284.111 - 10334.523: 93.6116% ( 125) 00:07:39.739 10334.523 - 10384.935: 94.0269% ( 63) 00:07:39.739 10384.935 - 10435.348: 94.3631% ( 51) 00:07:39.739 10435.348 - 10485.760: 94.7785% ( 63) 00:07:39.739 10485.760 - 10536.172: 95.1081% ( 50) 00:07:39.739 10536.172 - 10586.585: 95.4114% ( 46) 
00:07:39.739 10586.585 - 10636.997: 95.6685% ( 39) 00:07:39.739 10636.997 - 10687.409: 95.8465% ( 27) 00:07:39.739 10687.409 - 10737.822: 96.0113% ( 25) 00:07:39.739 10737.822 - 10788.234: 96.2948% ( 43) 00:07:39.739 10788.234 - 10838.646: 96.4069% ( 17) 00:07:39.739 10838.646 - 10889.058: 96.5717% ( 25) 00:07:39.739 10889.058 - 10939.471: 96.6772% ( 16) 00:07:39.739 10939.471 - 10989.883: 96.8552% ( 27) 00:07:39.739 10989.883 - 11040.295: 96.9080% ( 8) 00:07:39.739 11040.295 - 11090.708: 96.9871% ( 12) 00:07:39.739 11090.708 - 11141.120: 97.0464% ( 9) 00:07:39.739 11141.120 - 11191.532: 97.1189% ( 11) 00:07:39.739 11191.532 - 11241.945: 97.1585% ( 6) 00:07:39.739 11241.945 - 11292.357: 97.1980% ( 6) 00:07:39.739 11292.357 - 11342.769: 97.2903% ( 14) 00:07:39.739 11342.769 - 11393.182: 97.4420% ( 23) 00:07:39.739 11393.182 - 11443.594: 97.4749% ( 5) 00:07:39.739 11443.594 - 11494.006: 97.5277% ( 8) 00:07:39.739 11494.006 - 11544.418: 97.5738% ( 7) 00:07:39.739 11544.418 - 11594.831: 97.6398% ( 10) 00:07:39.739 11594.831 - 11645.243: 97.6859% ( 7) 00:07:39.739 11645.243 - 11695.655: 97.7321% ( 7) 00:07:39.739 11695.655 - 11746.068: 97.9233% ( 29) 00:07:39.739 11746.068 - 11796.480: 98.0156% ( 14) 00:07:39.739 11796.480 - 11846.892: 98.0749% ( 9) 00:07:39.739 11846.892 - 11897.305: 98.1342% ( 9) 00:07:39.739 11897.305 - 11947.717: 98.2133% ( 12) 00:07:39.739 11947.717 - 11998.129: 98.2859% ( 11) 00:07:39.739 11998.129 - 12048.542: 98.3782% ( 14) 00:07:39.739 12048.542 - 12098.954: 98.4375% ( 9) 00:07:39.739 12098.954 - 12149.366: 98.4836% ( 7) 00:07:39.739 12149.366 - 12199.778: 98.5298% ( 7) 00:07:39.739 12199.778 - 12250.191: 98.5562% ( 4) 00:07:39.739 12250.191 - 12300.603: 98.5825% ( 4) 00:07:39.739 12300.603 - 12351.015: 98.6419% ( 9) 00:07:39.740 12351.015 - 12401.428: 98.6814% ( 6) 00:07:39.740 12401.428 - 12451.840: 98.7408% ( 9) 00:07:39.740 12451.840 - 12502.252: 98.8660% ( 19) 00:07:39.740 12502.252 - 12552.665: 98.9254% ( 9) 00:07:39.740 12552.665 - 12603.077: 98.9583% ( 5) 00:07:39.740 12603.077 - 12653.489: 98.9979% ( 6) 00:07:39.740 12653.489 - 12703.902: 99.0374% ( 6) 00:07:39.740 12703.902 - 12754.314: 99.0572% ( 3) 00:07:39.740 12754.314 - 12804.726: 99.0770% ( 3) 00:07:39.740 12804.726 - 12855.138: 99.0902% ( 2) 00:07:39.740 12855.138 - 12905.551: 99.1034% ( 2) 00:07:39.740 12905.551 - 13006.375: 99.1363% ( 5) 00:07:39.740 13006.375 - 13107.200: 99.1561% ( 3) 00:07:39.740 18148.431 - 18249.255: 99.2023% ( 7) 00:07:39.740 18249.255 - 18350.080: 99.2682% ( 10) 00:07:39.740 18350.080 - 18450.905: 99.3275% ( 9) 00:07:39.740 18450.905 - 18551.729: 99.3473% ( 3) 00:07:39.740 18551.729 - 18652.554: 99.3737% ( 4) 00:07:39.740 18652.554 - 18753.378: 99.3935% ( 3) 00:07:39.740 18753.378 - 18854.203: 99.4198% ( 4) 00:07:39.740 18854.203 - 18955.028: 99.4462% ( 4) 00:07:39.740 18955.028 - 19055.852: 99.4660% ( 3) 00:07:39.740 19055.852 - 19156.677: 99.4858% ( 3) 00:07:39.740 19156.677 - 19257.502: 99.5055% ( 3) 00:07:39.740 19257.502 - 19358.326: 99.5319% ( 4) 00:07:39.740 19358.326 - 19459.151: 99.5517% ( 3) 00:07:39.740 19459.151 - 19559.975: 99.5715% ( 3) 00:07:39.740 19559.975 - 19660.800: 99.5781% ( 1) 00:07:39.740 23592.960 - 23693.785: 99.5847% ( 1) 00:07:39.740 23693.785 - 23794.609: 99.6242% ( 6) 00:07:39.740 23794.609 - 23895.434: 99.6704% ( 7) 00:07:39.740 23895.434 - 23996.258: 99.7099% ( 6) 00:07:39.740 24802.855 - 24903.680: 99.7231% ( 2) 00:07:39.740 24903.680 - 25004.505: 99.7495% ( 4) 00:07:39.740 25004.505 - 25105.329: 99.7758% ( 4) 00:07:39.740 25105.329 - 
25206.154: 99.7956% ( 3) 00:07:39.740 25206.154 - 25306.978: 99.8220% ( 4) 00:07:39.740 25306.978 - 25407.803: 99.8418% ( 3) 00:07:39.740 25407.803 - 25508.628: 99.8681% ( 4) 00:07:39.740 25508.628 - 25609.452: 99.8879% ( 3) 00:07:39.740 25609.452 - 25710.277: 99.9143% ( 4) 00:07:39.740 25710.277 - 25811.102: 99.9341% ( 3) 00:07:39.740 25811.102 - 26012.751: 99.9802% ( 7) 00:07:39.740 26012.751 - 26214.400: 100.0000% ( 3) 00:07:39.740 00:07:39.740 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.740 ============================================================================== 00:07:39.740 Range in us Cumulative IO count 00:07:39.740 6326.745 - 6351.951: 0.0066% ( 1) 00:07:39.740 6351.951 - 6377.157: 0.0132% ( 1) 00:07:39.740 6377.157 - 6402.363: 0.0264% ( 2) 00:07:39.740 6402.363 - 6427.569: 0.0330% ( 1) 00:07:39.740 6427.569 - 6452.775: 0.0791% ( 7) 00:07:39.740 6452.775 - 6503.188: 0.1912% ( 17) 00:07:39.740 6503.188 - 6553.600: 0.3758% ( 28) 00:07:39.740 6553.600 - 6604.012: 0.6857% ( 47) 00:07:39.740 6604.012 - 6654.425: 1.1472% ( 70) 00:07:39.740 6654.425 - 6704.837: 1.7075% ( 85) 00:07:39.740 6704.837 - 6755.249: 2.2547% ( 83) 00:07:39.740 6755.249 - 6805.662: 3.1052% ( 129) 00:07:39.740 6805.662 - 6856.074: 4.1996% ( 166) 00:07:39.740 6856.074 - 6906.486: 5.4193% ( 185) 00:07:39.740 6906.486 - 6956.898: 6.9159% ( 227) 00:07:39.740 6956.898 - 7007.311: 8.6498% ( 263) 00:07:39.740 7007.311 - 7057.723: 10.5551% ( 289) 00:07:39.740 7057.723 - 7108.135: 12.3879% ( 278) 00:07:39.740 7108.135 - 7158.548: 14.4053% ( 306) 00:07:39.740 7158.548 - 7208.960: 16.8315% ( 368) 00:07:39.740 7208.960 - 7259.372: 18.7764% ( 295) 00:07:39.740 7259.372 - 7309.785: 21.0311% ( 342) 00:07:39.740 7309.785 - 7360.197: 22.9760% ( 295) 00:07:39.740 7360.197 - 7410.609: 25.2901% ( 351) 00:07:39.740 7410.609 - 7461.022: 27.6635% ( 360) 00:07:39.740 7461.022 - 7511.434: 30.5578% ( 439) 00:07:39.740 7511.434 - 7561.846: 32.5422% ( 301) 00:07:39.740 7561.846 - 7612.258: 34.5332% ( 302) 00:07:39.740 7612.258 - 7662.671: 37.0385% ( 380) 00:07:39.740 7662.671 - 7713.083: 38.9768% ( 294) 00:07:39.740 7713.083 - 7763.495: 41.0140% ( 309) 00:07:39.740 7763.495 - 7813.908: 42.7083% ( 257) 00:07:39.740 7813.908 - 7864.320: 44.2774% ( 238) 00:07:39.740 7864.320 - 7914.732: 45.7213% ( 219) 00:07:39.740 7914.732 - 7965.145: 47.6991% ( 300) 00:07:39.740 7965.145 - 8015.557: 49.3341% ( 248) 00:07:39.740 8015.557 - 8065.969: 51.0812% ( 265) 00:07:39.740 8065.969 - 8116.382: 52.4196% ( 203) 00:07:39.740 8116.382 - 8166.794: 53.6392% ( 185) 00:07:39.740 8166.794 - 8217.206: 54.7666% ( 171) 00:07:39.740 8217.206 - 8267.618: 55.9599% ( 181) 00:07:39.740 8267.618 - 8318.031: 57.3246% ( 207) 00:07:39.740 8318.031 - 8368.443: 58.5377% ( 184) 00:07:39.740 8368.443 - 8418.855: 59.7046% ( 177) 00:07:39.740 8418.855 - 8469.268: 61.1089% ( 213) 00:07:39.740 8469.268 - 8519.680: 62.6253% ( 230) 00:07:39.740 8519.680 - 8570.092: 64.4976% ( 284) 00:07:39.740 8570.092 - 8620.505: 66.1195% ( 246) 00:07:39.740 8620.505 - 8670.917: 67.9325% ( 275) 00:07:39.740 8670.917 - 8721.329: 69.8906% ( 297) 00:07:39.740 8721.329 - 8771.742: 71.5124% ( 246) 00:07:39.740 8771.742 - 8822.154: 72.8903% ( 209) 00:07:39.740 8822.154 - 8872.566: 74.0704% ( 179) 00:07:39.740 8872.566 - 8922.978: 75.4219% ( 205) 00:07:39.740 8922.978 - 8973.391: 76.8328% ( 214) 00:07:39.740 8973.391 - 9023.803: 77.9866% ( 175) 00:07:39.740 9023.803 - 9074.215: 79.2590% ( 193) 00:07:39.740 9074.215 - 9124.628: 80.3204% ( 161) 00:07:39.740 9124.628 - 9175.040: 
81.3291% ( 153) 00:07:39.740 9175.040 - 9225.452: 82.2323% ( 137) 00:07:39.740 9225.452 - 9275.865: 83.0103% ( 118) 00:07:39.740 9275.865 - 9326.277: 83.5641% ( 84) 00:07:39.740 9326.277 - 9376.689: 83.9794% ( 63) 00:07:39.740 9376.689 - 9427.102: 84.4871% ( 77) 00:07:39.740 9427.102 - 9477.514: 85.0343% ( 83) 00:07:39.740 9477.514 - 9527.926: 85.4562% ( 64) 00:07:39.740 9527.926 - 9578.338: 85.9045% ( 68) 00:07:39.740 9578.338 - 9628.751: 86.3660% ( 70) 00:07:39.740 9628.751 - 9679.163: 86.7814% ( 63) 00:07:39.740 9679.163 - 9729.575: 87.5066% ( 110) 00:07:39.740 9729.575 - 9779.988: 87.9813% ( 72) 00:07:39.740 9779.988 - 9830.400: 88.3834% ( 61) 00:07:39.740 9830.400 - 9880.812: 88.7131% ( 50) 00:07:39.740 9880.812 - 9931.225: 89.2009% ( 74) 00:07:39.740 9931.225 - 9981.637: 89.6097% ( 62) 00:07:39.740 9981.637 - 10032.049: 90.1437% ( 81) 00:07:39.740 10032.049 - 10082.462: 90.6580% ( 78) 00:07:39.740 10082.462 - 10132.874: 91.2118% ( 84) 00:07:39.740 10132.874 - 10183.286: 91.9106% ( 106) 00:07:39.740 10183.286 - 10233.698: 92.4380% ( 80) 00:07:39.740 10233.698 - 10284.111: 92.9786% ( 82) 00:07:39.740 10284.111 - 10334.523: 93.3808% ( 61) 00:07:39.740 10334.523 - 10384.935: 93.6907% ( 47) 00:07:39.740 10384.935 - 10435.348: 94.1456% ( 69) 00:07:39.740 10435.348 - 10485.760: 94.4686% ( 49) 00:07:39.740 10485.760 - 10536.172: 94.7785% ( 47) 00:07:39.740 10536.172 - 10586.585: 95.1345% ( 54) 00:07:39.740 10586.585 - 10636.997: 95.3982% ( 40) 00:07:39.740 10636.997 - 10687.409: 95.6224% ( 34) 00:07:39.740 10687.409 - 10737.822: 95.9256% ( 46) 00:07:39.740 10737.822 - 10788.234: 96.1168% ( 29) 00:07:39.740 10788.234 - 10838.646: 96.2157% ( 15) 00:07:39.740 10838.646 - 10889.058: 96.3146% ( 15) 00:07:39.740 10889.058 - 10939.471: 96.5454% ( 35) 00:07:39.740 10939.471 - 10989.883: 96.6443% ( 15) 00:07:39.740 10989.883 - 11040.295: 96.7893% ( 22) 00:07:39.741 11040.295 - 11090.708: 96.9211% ( 20) 00:07:39.741 11090.708 - 11141.120: 96.9541% ( 5) 00:07:39.741 11141.120 - 11191.532: 97.0200% ( 10) 00:07:39.741 11191.532 - 11241.945: 97.0860% ( 10) 00:07:39.741 11241.945 - 11292.357: 97.1255% ( 6) 00:07:39.741 11292.357 - 11342.769: 97.1519% ( 4) 00:07:39.741 11342.769 - 11393.182: 97.1783% ( 4) 00:07:39.741 11393.182 - 11443.594: 97.2376% ( 9) 00:07:39.741 11443.594 - 11494.006: 97.4354% ( 30) 00:07:39.741 11494.006 - 11544.418: 97.4749% ( 6) 00:07:39.741 11544.418 - 11594.831: 97.5475% ( 11) 00:07:39.741 11594.831 - 11645.243: 97.7387% ( 29) 00:07:39.741 11645.243 - 11695.655: 97.8046% ( 10) 00:07:39.741 11695.655 - 11746.068: 97.8639% ( 9) 00:07:39.741 11746.068 - 11796.480: 97.8969% ( 5) 00:07:39.741 11796.480 - 11846.892: 97.9299% ( 5) 00:07:39.741 11846.892 - 11897.305: 97.9562% ( 4) 00:07:39.741 11897.305 - 11947.717: 97.9892% ( 5) 00:07:39.741 11947.717 - 11998.129: 98.0881% ( 15) 00:07:39.741 11998.129 - 12048.542: 98.1936% ( 16) 00:07:39.741 12048.542 - 12098.954: 98.3122% ( 18) 00:07:39.741 12098.954 - 12149.366: 98.4177% ( 16) 00:07:39.741 12149.366 - 12199.778: 98.5694% ( 23) 00:07:39.741 12199.778 - 12250.191: 98.7605% ( 29) 00:07:39.741 12250.191 - 12300.603: 98.8397% ( 12) 00:07:39.741 12300.603 - 12351.015: 98.8990% ( 9) 00:07:39.741 12351.015 - 12401.428: 98.9517% ( 8) 00:07:39.741 12401.428 - 12451.840: 98.9847% ( 5) 00:07:39.741 12451.840 - 12502.252: 99.0309% ( 7) 00:07:39.741 12502.252 - 12552.665: 99.0638% ( 5) 00:07:39.741 12552.665 - 12603.077: 99.0770% ( 2) 00:07:39.741 12603.077 - 12653.489: 99.0902% ( 2) 00:07:39.741 12653.489 - 12703.902: 99.1034% ( 2) 00:07:39.741 
12703.902 - 12754.314: 99.1166% ( 2) 00:07:39.741 12754.314 - 12804.726: 99.1297% ( 2) 00:07:39.741 12804.726 - 12855.138: 99.1429% ( 2) 00:07:39.741 12855.138 - 12905.551: 99.1561% ( 2) 00:07:39.741 15829.465 - 15930.289: 99.1693% ( 2) 00:07:39.741 15930.289 - 16031.114: 99.1957% ( 4) 00:07:39.741 16031.114 - 16131.938: 99.2220% ( 4) 00:07:39.741 16131.938 - 16232.763: 99.2484% ( 4) 00:07:39.741 16232.763 - 16333.588: 99.2748% ( 4) 00:07:39.741 16333.588 - 16434.412: 99.3012% ( 4) 00:07:39.741 16434.412 - 16535.237: 99.3275% ( 4) 00:07:39.741 16535.237 - 16636.062: 99.3539% ( 4) 00:07:39.741 16636.062 - 16736.886: 99.3803% ( 4) 00:07:39.741 16736.886 - 16837.711: 99.4066% ( 4) 00:07:39.741 16837.711 - 16938.535: 99.4264% ( 3) 00:07:39.741 16938.535 - 17039.360: 99.4528% ( 4) 00:07:39.741 17039.360 - 17140.185: 99.4726% ( 3) 00:07:39.741 17140.185 - 17241.009: 99.4989% ( 4) 00:07:39.741 17241.009 - 17341.834: 99.5253% ( 4) 00:07:39.741 17341.834 - 17442.658: 99.5517% ( 4) 00:07:39.741 17442.658 - 17543.483: 99.5781% ( 4) 00:07:39.741 22786.363 - 22887.188: 99.5978% ( 3) 00:07:39.741 22887.188 - 22988.012: 99.6242% ( 4) 00:07:39.741 22988.012 - 23088.837: 99.6440% ( 3) 00:07:39.741 23088.837 - 23189.662: 99.6704% ( 4) 00:07:39.741 23189.662 - 23290.486: 99.6967% ( 4) 00:07:39.741 23290.486 - 23391.311: 99.7165% ( 3) 00:07:39.741 23391.311 - 23492.135: 99.7429% ( 4) 00:07:39.741 23492.135 - 23592.960: 99.7627% ( 3) 00:07:39.741 23592.960 - 23693.785: 99.7890% ( 4) 00:07:39.741 23693.785 - 23794.609: 99.8154% ( 4) 00:07:39.741 23794.609 - 23895.434: 99.8352% ( 3) 00:07:39.741 23895.434 - 23996.258: 99.8616% ( 4) 00:07:39.741 23996.258 - 24097.083: 99.8813% ( 3) 00:07:39.741 24097.083 - 24197.908: 99.9077% ( 4) 00:07:39.741 24197.908 - 24298.732: 99.9341% ( 4) 00:07:39.741 24298.732 - 24399.557: 99.9604% ( 4) 00:07:39.741 24399.557 - 24500.382: 99.9868% ( 4) 00:07:39.741 24500.382 - 24601.206: 100.0000% ( 2) 00:07:39.741 00:07:39.741 12:05:40 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:39.741 00:07:39.741 real 0m2.499s 00:07:39.741 user 0m2.188s 00:07:39.741 sys 0m0.203s 00:07:39.741 12:05:40 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.741 ************************************ 00:07:39.741 END TEST nvme_perf 00:07:39.741 ************************************ 00:07:39.741 12:05:40 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.741 12:05:40 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.741 ************************************ 00:07:39.741 START TEST nvme_hello_world 00:07:39.741 ************************************ 00:07:39.741 12:05:40 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:39.741 Initializing NVMe Controllers 00:07:39.741 Attached to 0000:00:10.0 00:07:39.741 Namespace ID: 1 size: 6GB 00:07:39.741 Attached to 0000:00:11.0 00:07:39.741 Namespace ID: 1 size: 5GB 00:07:39.741 Attached to 0000:00:13.0 00:07:39.741 Namespace ID: 1 size: 1GB 00:07:39.741 Attached to 0000:00:12.0 00:07:39.741 Namespace ID: 1 size: 4GB 00:07:39.741 Namespace ID: 2 size: 4GB 00:07:39.741 Namespace ID: 3 size: 4GB 00:07:39.741 Initialization complete. 
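The hello_world example that runs next prints the "Attached to ..." and "Namespace ID: ... size: ..." lines by probing every local PCIe controller and walking its active namespaces. A minimal sketch of that attach flow against SPDK's public NVMe API, not the example's actual source; error handling and the I/O path are omitted, and the callback names are illustrative:

    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller the probe discovers */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        /* walk the active namespaces and report their sizes, as the log does */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            printf("Namespace ID: %u size: %juGB\n", nsid,
                   (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
        }
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_world";
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* enumerate local PCIe controllers; attach_cb fires once per controller */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        return 0;
    }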
00:07:39.741 12:05:40 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:39.741 ************************************
00:07:39.741 START TEST nvme_hello_world
00:07:39.741 ************************************
00:07:39.741 12:05:40 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:39.741 Initializing NVMe Controllers
00:07:39.741 Attached to 0000:00:10.0
00:07:39.741 Namespace ID: 1 size: 6GB
00:07:39.741 Attached to 0000:00:11.0
00:07:39.741 Namespace ID: 1 size: 5GB
00:07:39.741 Attached to 0000:00:13.0
00:07:39.741 Namespace ID: 1 size: 1GB
00:07:39.741 Attached to 0000:00:12.0
00:07:39.741 Namespace ID: 1 size: 4GB
00:07:39.741 Namespace ID: 2 size: 4GB
00:07:39.741 Namespace ID: 3 size: 4GB
00:07:39.741 Initialization complete.
00:07:39.741 INFO: using host memory buffer for IO
00:07:39.741 Hello world!
00:07:39.741 INFO: using host memory buffer for IO
00:07:39.741 Hello world!
00:07:39.741 INFO: using host memory buffer for IO
00:07:39.741 Hello world!
00:07:39.741 INFO: using host memory buffer for IO
00:07:39.741 Hello world!
00:07:39.741 INFO: using host memory buffer for IO
00:07:39.741 Hello world!
00:07:39.741 INFO: using host memory buffer for IO
00:07:39.741 Hello world!
00:07:39.741
00:07:39.741 real	0m0.231s
00:07:39.741 user	0m0.075s
00:07:39.741 sys	0m0.112s
00:07:39.741 ************************************
00:07:39.741 12:05:40 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:39.741 12:05:40 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:39.741 END TEST nvme_hello_world
00:07:39.741 ************************************
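The sgl test that runs next builds vectored read and write requests out of multiple scatter-gather elements; the "Invalid IO length parameter" lines below are requests whose total SGE length is deliberately not a whole number of blocks, while the "test passed" lines are the well-formed ones. A hedged sketch of how such a scattered payload is fed to SPDK through its SGL callbacks; the context struct and helper names are invented for illustration:

    #include "spdk/nvme.h"

    struct sgl_ctx {
        void     *segs[4];   /* pre-allocated DMA-safe buffers (hypothetical) */
        uint32_t  seg_len;   /* bytes per segment */
        uint32_t  idx;       /* cursor used by the SGE callbacks */
    };

    static void
    reset_sgl(void *arg, uint32_t offset)
    {
        struct sgl_ctx *ctx = arg;

        ctx->idx = offset / ctx->seg_len; /* rewind to the segment holding 'offset' */
    }

    static int
    next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = arg;

        *address = ctx->segs[ctx->idx++];
        *length = ctx->seg_len;
        return 0;
    }

    /* Submit one write whose payload is scattered across ctx->segs.  When the
     * total length is not a multiple of the block size, submission fails in
     * the way the "Invalid IO length parameter" lines report. */
    static int
    submit_scattered_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                           struct sgl_ctx *ctx, spdk_nvme_cmd_cb done)
    {
        uint32_t lba_count = (4 * ctx->seg_len) / spdk_nvme_ns_get_sector_size(ns);

        return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* lba */, lba_count,
                                       done, ctx, 0 /* io_flags */,
                                       reset_sgl, next_sge);
    }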
00:07:39.741 12:05:40 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.741 12:05:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:39.741 ************************************
00:07:39.741 START TEST nvme_sgl
00:07:39.741 ************************************
00:07:39.741 12:05:40 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:39.998 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:39.998 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:39.998 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:39.998 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:39.998 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:39.998 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:39.998 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:39.998 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:39.998 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:39.998 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:39.998 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:39.998 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:39.998 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:39.998 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:39.998 NVMe Readv/Writev Request test
00:07:39.998 Attached to 0000:00:10.0
00:07:39.998 Attached to 0000:00:11.0
00:07:39.998 Attached to 0000:00:13.0
00:07:39.998 Attached to 0000:00:12.0
00:07:39.998 0000:00:10.0: build_io_request_2 test passed
00:07:39.998 0000:00:10.0: build_io_request_4 test passed
00:07:39.998 0000:00:10.0: build_io_request_5 test passed
00:07:39.998 0000:00:10.0: build_io_request_6 test passed
00:07:39.998 0000:00:10.0: build_io_request_7 test passed
00:07:39.998 0000:00:10.0: build_io_request_10 test passed
00:07:39.998 0000:00:11.0: build_io_request_2 test passed
00:07:39.998 0000:00:11.0: build_io_request_4 test passed
00:07:39.998 0000:00:11.0: build_io_request_5 test passed
00:07:39.998 0000:00:11.0: build_io_request_6 test passed
00:07:39.998 0000:00:11.0: build_io_request_7 test passed
00:07:39.998 0000:00:11.0: build_io_request_10 test passed
00:07:39.998 Cleaning up...
00:07:39.998
00:07:39.998 real	0m0.278s
00:07:39.998 user	0m0.148s
00:07:39.998 sys	0m0.089s
00:07:39.998 12:05:41 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:39.998 12:05:41 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:39.998 ************************************
00:07:39.998 END TEST nvme_sgl
00:07:39.998 ************************************
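The nvme_dp binary behind the nvme_e2edp test below writes and reads with end-to-end data protection enabled and verifies the protection information on the way back. A rough sketch of one protected write using SPDK's metadata-aware command; it assumes a namespace formatted with PI type 1 and extended LBAs, and the buffer setup is illustrative rather than the test's actual code:

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Write one block with Protection Information generated by the controller
     * (PRACT) and checked against the guard and reference tag.  Assumes 'ns'
     * was formatted with PI type 1 and extended LBA metadata. */
    static int
    write_with_pi(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                  spdk_nvme_cmd_cb done, void *ctx)
    {
        uint32_t flags = SPDK_NVME_IO_FLAGS_PRACT |
                         SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                         SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;
        void *buf = spdk_zmalloc(spdk_nvme_ns_get_sector_size(ns), 0x1000, NULL,
                                 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        if (buf == NULL) {
            return -1;
        }
        /* metadata pointer is NULL: with extended LBAs the PI travels in-band */
        return spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf, NULL,
                                              0 /* lba */, 1 /* lba_count */,
                                              done, ctx, flags,
                                              0 /* apptag_mask */, 0 /* apptag */);
    }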
00:07:40.255 12:05:41 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:40.255 12:05:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:40.255 12:05:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.255 12:05:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.255 ************************************
00:07:40.255 START TEST nvme_e2edp
00:07:40.255 ************************************
00:07:40.255 12:05:41 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:40.255 NVMe Write/Read with End-to-End data protection test
00:07:40.255 Attached to 0000:00:10.0
00:07:40.255 Attached to 0000:00:11.0
00:07:40.255 Attached to 0000:00:13.0
00:07:40.255 Attached to 0000:00:12.0
00:07:40.255 Cleaning up...
00:07:40.512
00:07:40.512 real	0m0.244s
00:07:40.512 user	0m0.091s
00:07:40.512 sys	0m0.109s
00:07:40.512 ************************************
00:07:40.512 END TEST nvme_e2edp
00:07:40.512 ************************************
00:07:40.512 12:05:41 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.512 12:05:41 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:40.512 12:05:41 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:40.512 12:05:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:40.512 12:05:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.512 12:05:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.512 ************************************
00:07:40.512 START TEST nvme_reserve
00:07:40.512 ************************************
00:07:40.512 12:05:41 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:40.512 =====================================================
00:07:40.512 NVMe Controller at PCI bus 0, device 16, function 0
00:07:40.512 =====================================================
00:07:40.512 Reservations: Not Supported
00:07:40.512 =====================================================
00:07:40.512 NVMe Controller at PCI bus 0, device 17, function 0
00:07:40.512 =====================================================
00:07:40.512 Reservations: Not Supported
00:07:40.512 =====================================================
00:07:40.512 NVMe Controller at PCI bus 0, device 19, function 0
00:07:40.512 =====================================================
00:07:40.512 Reservations: Not Supported
00:07:40.512 =====================================================
00:07:40.512 NVMe Controller at PCI bus 0, device 18, function 0
00:07:40.512 =====================================================
00:07:40.512 Reservations: Not Supported
00:07:40.513 Reservation test passed
00:07:40.513
00:07:40.513 real	0m0.214s
00:07:40.513 user	0m0.073s
00:07:40.513 sys	0m0.098s
00:07:40.513 12:05:41 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.513 12:05:41 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:40.513 ************************************
00:07:40.513 END TEST nvme_reserve
00:07:40.513 ************************************
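All four QEMU controllers above report "Reservations: Not Supported", so the reserve test reduces to a capability probe here. On a controller that does support reservations, the same test registers a key and acquires the namespace; a hedged sketch of those two steps, with completion handling omitted and the field and enum usage taken from SPDK's public header as best understood:

    #include <string.h>
    #include "spdk/nvme.h"

    /* Returns false when the controller does not implement reservations,
     * which is what the log above reports for all four QEMU controllers. */
    static bool
    ctrlr_supports_reservations(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        return cdata->oncs.reservations;
    }

    static int
    register_and_acquire(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                         uint64_t key, spdk_nvme_cmd_cb done, void *ctx)
    {
        struct spdk_nvme_reservation_register_data reg = {};
        struct spdk_nvme_reservation_acquire_data acq = {};
        int rc;

        reg.nrkey = key; /* new reservation key to register for this host */
        rc = spdk_nvme_ns_cmd_reservation_register(ns, qpair, &reg,
                 true /* ignore_key */,
                 SPDK_NVME_RESERVE_REGISTER_KEY,
                 SPDK_NVME_RESERVE_PTPL_NO_CHANGES,
                 done, ctx);
        if (rc != 0) {
            return rc;
        }
        acq.crkey = key; /* current key proves ownership when acquiring */
        return spdk_nvme_ns_cmd_reservation_acquire(ns, qpair, &acq,
                 false /* ignore_key */,
                 SPDK_NVME_RESERVE_ACQUIRE,
                 SPDK_NVME_RESERVE_WRITE_EXCLUSIVE,
                 done, ctx);
    }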
00:07:40.770 12:05:41 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:40.770 12:05:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:40.770 12:05:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.770 12:05:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.770 ************************************
00:07:40.770 START TEST nvme_err_injection
00:07:40.770 ************************************
00:07:40.770 12:05:41 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:41.027 NVMe Error Injection test
00:07:41.027 Attached to 0000:00:10.0
00:07:41.027 Attached to 0000:00:11.0
00:07:41.027 Attached to 0000:00:13.0
00:07:41.027 Attached to 0000:00:12.0
00:07:41.027 0000:00:10.0: get features failed as expected
00:07:41.027 0000:00:11.0: get features failed as expected
00:07:41.027 0000:00:13.0: get features failed as expected
00:07:41.027 0000:00:12.0: get features failed as expected
00:07:41.027 0000:00:10.0: get features successfully as expected
00:07:41.027 0000:00:11.0: get features successfully as expected
00:07:41.027 0000:00:13.0: get features successfully as expected
00:07:41.027 0000:00:12.0: get features successfully as expected
00:07:41.027 0000:00:10.0: read failed as expected
00:07:41.027 0000:00:11.0: read failed as expected
00:07:41.027 0000:00:13.0: read failed as expected
00:07:41.027 0000:00:12.0: read failed as expected
00:07:41.027 0000:00:10.0: read successfully as expected
00:07:41.027 0000:00:11.0: read successfully as expected
00:07:41.027 0000:00:13.0: read successfully as expected
00:07:41.027 0000:00:12.0: read successfully as expected
00:07:41.027 Cleaning up...
00:07:41.027
00:07:41.027 real	0m0.254s
00:07:41.027 user	0m0.082s
00:07:41.027 sys	0m0.108s
00:07:41.027 12:05:41 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.027 12:05:41 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:41.027 ************************************
00:07:41.027 END TEST nvme_err_injection
00:07:41.027 ************************************
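The err_injection output above comes in pairs: each admin or read command first fails because an error was injected ahead of it ("failed as expected"), then succeeds once the injection is exhausted or removed ("successfully as expected"). A sketch of that pattern using SPDK's command error injection hooks; passing a NULL qpair to target the admin queue is an assumption here, and the feature ID is only an example:

    #include "spdk/nvme.h"

    /* Make the next Get Features on the admin queue complete with
     * Invalid Field, then issue it and expect the failure seen in the log. */
    static int
    expect_get_features_failure(struct spdk_nvme_ctrlr *ctrlr,
                                spdk_nvme_cmd_cb done, void *ctx)
    {
        int rc;

        rc = spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL /* admin qpair */,
                 SPDK_NVME_OPC_GET_FEATURES,
                 false /* do_not_submit */, 0 /* timeout_in_us */,
                 1 /* err_count */,
                 SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
        if (rc != 0) {
            return rc;
        }
        rc = spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_NUMBER_OF_QUEUES,
                                             0 /* cdw11 */, NULL, 0, done, ctx);
        /* once err_count is exhausted the same command succeeds again,
         * producing the "get features successfully as expected" lines */
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                                                   SPDK_NVME_OPC_GET_FEATURES);
        return rc;
    }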
00:07:41.027 12:05:41 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:41.027 12:05:41 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:41.027 12:05:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.027 12:05:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:41.027 ************************************
00:07:41.027 START TEST nvme_overhead
00:07:41.027 ************************************
00:07:41.027 12:05:41 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:42.406 Initializing NVMe Controllers
00:07:42.406 Attached to 0000:00:10.0
00:07:42.406 Attached to 0000:00:11.0
00:07:42.406 Attached to 0000:00:13.0
00:07:42.406 Attached to 0000:00:12.0
00:07:42.406 Initialization complete. Launching workers.
00:07:42.406 submit (in ns) avg, min, max = 11687.3, 10139.2, 427405.4
00:07:42.406 complete (in ns) avg, min, max = 7826.0, 7275.4, 110616.2
00:07:42.406
00:07:42.406 Submit histogram
00:07:42.406 ================
00:07:42.406 Range in us Cumulative Count
00:07:42.406 [submit latency histogram buckets, 10.092 us through 428.505 us: cumulative count rises from 0.0064% to 100.0000%]
00:07:42.407
00:07:42.407 Complete histogram
00:07:42.407 ==================
00:07:42.407 Range in us Cumulative Count
00:07:42.407 [completion latency histogram buckets, 7.237 us through 111.065 us: cumulative count rises from 0.0064% to 100.0000%]
00:07:42.408
00:07:42.408 real	0m1.233s
00:07:42.408 user	0m1.070s
00:07:42.408 sys	0m0.110s
00:07:42.408 12:05:43 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.408 12:05:43 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:42.408 ************************************
00:07:42.408 END TEST nvme_overhead
00:07:42.408 ************************************
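The arbitration example that runs next opens one urgent-priority queue pair per core; the per-core IO/s lines below show weighted round robin sharing the controller among those threads. A minimal sketch of the two pieces involved, enabling WRR when the controller is attached and allocating a prioritized queue pair; treat the exact option values as illustrative:

    #include <stddef.h>
    #include "spdk/nvme.h"

    /* Ask for weighted-round-robin arbitration before the controller is
     * enabled; this has to happen in the probe callback. */
    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
        return true;
    }

    /* Open an I/O queue pair serviced at urgent priority, matching the
     * "Starting thread on core N with urgent priority queue" threads. */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }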
00:07:42.408 12:05:43 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:42.408 12:05:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:42.408 12:05:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.408 12:05:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:42.408 ************************************
00:07:42.408 START TEST nvme_arbitration
00:07:42.408 ************************************
00:07:42.408 12:05:43 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:45.688 Initializing NVMe Controllers
00:07:45.688 Attached to 0000:00:10.0
00:07:45.688 Attached to 0000:00:11.0
00:07:45.688 Attached to 0000:00:13.0
00:07:45.688 Attached to 0000:00:12.0
00:07:45.688 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:07:45.688 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:07:45.688 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:07:45.688 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:45.688 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:45.688 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:45.688 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:45.688 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:45.688 Initialization complete. Launching workers.
00:07:45.688 Starting thread on core 1 with urgent priority queue
00:07:45.688 Starting thread on core 2 with urgent priority queue
00:07:45.688 Starting thread on core 3 with urgent priority queue
00:07:45.688 Starting thread on core 0 with urgent priority queue
00:07:45.688 QEMU NVMe Ctrl (12340 ) core 0: 853.33 IO/s 117.19 secs/100000 ios
00:07:45.688 QEMU NVMe Ctrl (12342 ) core 0: 853.33 IO/s 117.19 secs/100000 ios
00:07:45.688 QEMU NVMe Ctrl (12341 ) core 1: 853.33 IO/s 117.19 secs/100000 ios
00:07:45.688 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios
00:07:45.688 QEMU NVMe Ctrl (12343 ) core 2: 896.00 IO/s 111.61 secs/100000 ios
00:07:45.688 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios
00:07:45.688 ========================================================
00:07:45.688
00:07:45.688
00:07:45.688 real	0m3.308s
00:07:45.688 user	0m9.220s
00:07:45.688 sys	0m0.117s
00:07:45.688 12:05:46 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.688 12:05:46 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:45.688 ************************************
00:07:45.688 END TEST nvme_arbitration
00:07:45.688 ************************************
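The aer test that follows registers an asynchronous event callback and then lowers each controller's temperature threshold below its current temperature, so the controller raises the AER reported as the "aer_cb for log page 2" lines. A hedged sketch of both steps; the 200 Kelvin threshold is an arbitrary value chosen to sit below the 323 Kelvin the devices report:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* bits 23:16 of the AER completion dword carry the log page ID;
         * log page 2 (SMART / health) is the temperature event in the log */
        uint32_t log_page = (cpl->cdw0 >> 16) & 0xff;

        printf("aer_cb for log page %u\n", log_page);
    }

    static int
    trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr,
                            spdk_nvme_cmd_cb done, void *ctx)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        /* cdw11 is the composite temperature threshold in Kelvin; 200 K is
         * far below the 323 K the devices report, so the AER fires */
        return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                   SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                   200 /* cdw11 */, 0 /* cdw12 */, NULL, 0, done, ctx);
    }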
00:07:45.689 12:05:46 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:45.689 12:05:46 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:45.689 12:05:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.689 12:05:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:45.689 ************************************
00:07:45.689 START TEST nvme_single_aen
00:07:45.689 ************************************
00:07:45.689 12:05:46 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:45.689 Asynchronous Event Request test
00:07:45.689 Attached to 0000:00:10.0
00:07:45.689 Attached to 0000:00:11.0
00:07:45.689 Attached to 0000:00:13.0
00:07:45.689 Attached to 0000:00:12.0
00:07:45.689 Reset controller to setup AER completions for this process
00:07:45.689 Registering asynchronous event callbacks...
00:07:45.689 Getting orig temperature thresholds of all controllers
00:07:45.689 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:45.689 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:45.689 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:45.689 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:45.689 Setting all controllers temperature threshold low to trigger AER
00:07:45.689 Waiting for all controllers temperature threshold to be set lower
00:07:45.689 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:45.689 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:45.689 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:45.689 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:45.689 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:45.689 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:45.689 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:45.689 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:45.689 Waiting for all controllers to trigger AER and reset threshold
00:07:45.689 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:45.689 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:45.689 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:45.689 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:45.689 Cleaning up...
00:07:45.689
00:07:45.689 real	0m0.209s
00:07:45.689 user	0m0.069s
00:07:45.689 sys	0m0.097s
00:07:45.689 12:05:46 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.689 ************************************
00:07:45.689 END TEST nvme_single_aen
00:07:45.689 ************************************
00:07:45.689 12:05:46 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:07:45.946 12:05:46 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:45.946 12:05:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.946 12:05:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.946 12:05:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:45.946 ************************************
00:07:45.946 START TEST nvme_doorbell_aers
00:07:45.946 ************************************
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:45.946 12:05:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:46.346 [2024-11-25 12:05:47.070636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request.
00:07:56.312 Executing: test_write_invalid_db
00:07:56.312 Waiting for AER completion...
00:07:56.312 Failure: test_write_invalid_db
00:07:56.312
00:07:56.312 Executing: test_invalid_db_write_overflow_sq
00:07:56.312 Waiting for AER completion...
00:07:56.312 Failure: test_invalid_db_write_overflow_sq
00:07:56.312
00:07:56.312 Executing: test_invalid_db_write_overflow_cq
00:07:56.312 Waiting for AER completion...
00:07:56.312 Failure: test_invalid_db_write_overflow_cq
00:07:56.312
00:07:56.312 12:05:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:56.312 12:05:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:07:56.312 [2024-11-25 12:05:57.122825] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request.
00:08:06.278 Executing: test_write_invalid_db
00:08:06.278 Waiting for AER completion...
00:08:06.278 Failure: test_write_invalid_db
00:08:06.278
00:08:06.278 Executing: test_invalid_db_write_overflow_sq
00:08:06.278 Waiting for AER completion...
00:08:06.278 Failure: test_invalid_db_write_overflow_sq
00:08:06.278
00:08:06.278 Executing: test_invalid_db_write_overflow_cq
00:08:06.278 Waiting for AER completion...
00:08:06.278 Failure: test_invalid_db_write_overflow_cq
00:08:06.278
00:08:06.278 12:06:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:06.278 12:06:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:06.278 [2024-11-25 12:06:07.137494] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request.
00:08:16.370 Executing: test_write_invalid_db
00:08:16.370 Waiting for AER completion...
00:08:16.370 Failure: test_write_invalid_db
00:08:16.370
00:08:16.370 Executing: test_invalid_db_write_overflow_sq
00:08:16.370 Waiting for AER completion...
00:08:16.370 Failure: test_invalid_db_write_overflow_sq
00:08:16.370
00:08:16.370 Executing: test_invalid_db_write_overflow_cq
00:08:16.370 Waiting for AER completion...
00:08:16.370 Failure: test_invalid_db_write_overflow_cq 00:08:16.370 00:08:16.370 12:06:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.370 12:06:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:16.370 [2024-11-25 12:06:17.154423] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 Executing: test_write_invalid_db 00:08:26.351 Waiting for AER completion... 00:08:26.351 Failure: test_write_invalid_db 00:08:26.351 00:08:26.351 Executing: test_invalid_db_write_overflow_sq 00:08:26.351 Waiting for AER completion... 00:08:26.351 Failure: test_invalid_db_write_overflow_sq 00:08:26.351 00:08:26.351 Executing: test_invalid_db_write_overflow_cq 00:08:26.351 Waiting for AER completion... 00:08:26.351 Failure: test_invalid_db_write_overflow_cq 00:08:26.351 00:08:26.351 00:08:26.351 real 0m40.206s 00:08:26.351 user 0m34.175s 00:08:26.351 sys 0m5.662s 00:08:26.351 ************************************ 00:08:26.351 END TEST nvme_doorbell_aers 00:08:26.351 ************************************ 00:08:26.351 12:06:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.351 12:06:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:26.351 12:06:27 nvme -- nvme/nvme.sh@97 -- # uname 00:08:26.351 12:06:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:26.351 12:06:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:26.351 12:06:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:26.351 12:06:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.351 12:06:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.351 ************************************ 00:08:26.351 START TEST nvme_multi_aen 00:08:26.351 ************************************ 00:08:26.351 12:06:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:26.351 [2024-11-25 12:06:27.220281] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.220346] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.220356] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.221877] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.221918] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.221927] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.223101] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. 
Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.223131] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.223139] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.224166] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.224193] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 [2024-11-25 12:06:27.224201] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63427) is not found. Dropping the request. 00:08:26.351 Child process pid: 63953 00:08:26.351 [Child] Asynchronous Event Request test 00:08:26.351 [Child] Attached to 0000:00:10.0 00:08:26.351 [Child] Attached to 0000:00:11.0 00:08:26.351 [Child] Attached to 0000:00:13.0 00:08:26.351 [Child] Attached to 0000:00:12.0 00:08:26.351 [Child] Registering asynchronous event callbacks... 00:08:26.351 [Child] Getting orig temperature thresholds of all controllers 00:08:26.351 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.351 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.351 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.351 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.351 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:26.351 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.351 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.351 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.351 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.351 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.351 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.351 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.351 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.351 [Child] Cleaning up... 00:08:26.609 Asynchronous Event Request test 00:08:26.609 Attached to 0000:00:10.0 00:08:26.609 Attached to 0000:00:11.0 00:08:26.609 Attached to 0000:00:13.0 00:08:26.609 Attached to 0000:00:12.0 00:08:26.609 Reset controller to setup AER completions for this process 00:08:26.609 Registering asynchronous event callbacks... 
00:08:26.609 Getting orig temperature thresholds of all controllers 00:08:26.609 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.609 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.609 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.609 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:26.609 Setting all controllers temperature threshold low to trigger AER 00:08:26.609 Waiting for all controllers temperature threshold to be set lower 00:08:26.609 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.609 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:26.609 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.609 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:26.609 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.609 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:26.609 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:26.609 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:26.609 Waiting for all controllers to trigger AER and reset threshold 00:08:26.609 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.609 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.609 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.609 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.609 Cleaning up... 00:08:26.609 00:08:26.609 real 0m0.445s 00:08:26.609 user 0m0.136s 00:08:26.609 sys 0m0.207s 00:08:26.609 12:06:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.609 12:06:27 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:26.609 ************************************ 00:08:26.609 END TEST nvme_multi_aen 00:08:26.609 ************************************ 00:08:26.609 12:06:27 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:26.609 12:06:27 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:26.609 12:06:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.609 12:06:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.609 ************************************ 00:08:26.609 START TEST nvme_startup 00:08:26.609 ************************************ 00:08:26.609 12:06:27 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:26.868 Initializing NVMe Controllers 00:08:26.868 Attached to 0000:00:10.0 00:08:26.868 Attached to 0000:00:11.0 00:08:26.868 Attached to 0000:00:13.0 00:08:26.868 Attached to 0000:00:12.0 00:08:26.868 Initialization complete. 00:08:26.868 Time used:148216.719 (us). 
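Both the single- and multi-process AER tests above force an event deterministically with the same trick: the controllers report a composite temperature of 323 K (50 °C), so dropping the Temperature Threshold feature (FID 0x04) below that value makes each controller post an asynchronous event pointing at the SMART / Health log (log page 2), after which the callback restores the original 343 K threshold. A conceptual reproduction with nvme-cli on a kernel-owned drive (not the SPDK-bound devices in this run; the device node is a placeholder and flag spellings should be checked against your nvme-cli release):

```bash
# Hypothetical illustration on a kernel-visible controller.
dev=/dev/nvme0                 # assumed device node

# FID 0x04 = Temperature Threshold; values are in Kelvin.
nvme get-feature "$dev" -f 0x04              # typically 0x157 (343 K / 70 C)
nvme set-feature "$dev" -f 0x04 -v 0x0142    # 322 K, just below the 323 K reading

# The resulting AEN points at log page 2 (SMART / Health Information);
# reading it clears the event so another can fire.
nvme smart-log "$dev" | grep -i temperature

nvme set-feature "$dev" -f 0x04 -v 0x0157    # restore the 343 K default
```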
00:08:26.868 00:08:26.868 real 0m0.214s 00:08:26.868 user 0m0.081s 00:08:26.868 sys 0m0.089s 00:08:26.868 12:06:27 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.868 ************************************ 00:08:26.868 END TEST nvme_startup 00:08:26.868 ************************************ 00:08:26.868 12:06:27 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:26.868 12:06:27 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:26.868 12:06:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.868 12:06:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.868 12:06:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.868 ************************************ 00:08:26.868 START TEST nvme_multi_secondary 00:08:26.868 ************************************ 00:08:26.868 12:06:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:26.868 12:06:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63998 00:08:26.868 12:06:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63999 00:08:26.868 12:06:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:26.868 12:06:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:26.868 12:06:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:30.159 Initializing NVMe Controllers 00:08:30.159 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.159 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.159 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.159 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.159 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:30.159 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:30.159 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:30.159 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:30.159 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:30.159 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:30.159 Initialization complete. Launching workers. 
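nvme_multi_secondary exercises SPDK's multi-process mode: three spdk_nvme_perf instances share shared-memory id 0 (-i 0) while pinning to disjoint core masks, and the latency tables that follow report each instance (core 1, core 2, core 0) separately. A sketch of that process layout, with flags taken from the invocations traced above:

```bash
# Sketch of the three-process layout behind the tables below; the flags
# mirror the spdk_nvme_perf invocations in the trace. All three share
# shm id 0 but run on disjoint cores (0x1 = core 0, 0x2 = core 1,
# 0x4 = core 2).
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid2=$!

# Reap the shorter secondary runs first, then the long one, as the
# "wait 63998" / "wait 63999" steps in the trace do.
wait "$pid1"; wait "$pid2"; wait "$pid0"
```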
00:08:30.159 ======================================================== 00:08:30.159 Latency(us) 00:08:30.159 Device Information : IOPS MiB/s Average min max 00:08:30.159 PCIE (0000:00:10.0) NSID 1 from core 1: 7761.69 30.32 2059.95 682.64 7415.05 00:08:30.159 PCIE (0000:00:11.0) NSID 1 from core 1: 7761.69 30.32 2061.06 711.96 6786.44 00:08:30.159 PCIE (0000:00:13.0) NSID 1 from core 1: 7761.69 30.32 2061.04 746.88 6786.17 00:08:30.159 PCIE (0000:00:12.0) NSID 1 from core 1: 7761.69 30.32 2061.02 747.41 6327.43 00:08:30.159 PCIE (0000:00:12.0) NSID 2 from core 1: 7761.69 30.32 2060.99 737.47 6153.49 00:08:30.159 PCIE (0000:00:12.0) NSID 3 from core 1: 7761.69 30.32 2060.96 726.12 7815.98 00:08:30.159 ======================================================== 00:08:30.159 Total : 46570.11 181.91 2060.84 682.64 7815.98 00:08:30.159 00:08:30.159 Initializing NVMe Controllers 00:08:30.159 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.159 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.159 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.159 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.159 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:30.159 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:30.159 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:30.159 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:30.159 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:30.159 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:30.159 Initialization complete. Launching workers. 00:08:30.159 ======================================================== 00:08:30.159 Latency(us) 00:08:30.159 Device Information : IOPS MiB/s Average min max 00:08:30.159 PCIE (0000:00:10.0) NSID 1 from core 2: 3353.47 13.10 4769.53 1016.12 16254.81 00:08:30.159 PCIE (0000:00:11.0) NSID 1 from core 2: 3353.47 13.10 4770.61 1034.60 13831.23 00:08:30.159 PCIE (0000:00:13.0) NSID 1 from core 2: 3353.47 13.10 4770.82 1048.72 14027.59 00:08:30.159 PCIE (0000:00:12.0) NSID 1 from core 2: 3353.47 13.10 4770.87 1180.73 13868.88 00:08:30.159 PCIE (0000:00:12.0) NSID 2 from core 2: 3353.47 13.10 4770.88 1045.87 12623.31 00:08:30.159 PCIE (0000:00:12.0) NSID 3 from core 2: 3353.47 13.10 4771.34 1049.73 13695.79 00:08:30.159 ======================================================== 00:08:30.159 Total : 20120.82 78.60 4770.67 1016.12 16254.81 00:08:30.159 00:08:30.417 12:06:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63998 00:08:32.314 Initializing NVMe Controllers 00:08:32.314 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.314 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.314 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.314 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.314 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:32.314 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:32.314 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:32.314 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:32.314 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:32.314 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:32.314 Initialization complete. Launching workers. 
00:08:32.314 ======================================================== 00:08:32.314 Latency(us) 00:08:32.314 Device Information : IOPS MiB/s Average min max 00:08:32.314 PCIE (0000:00:10.0) NSID 1 from core 0: 10833.70 42.32 1475.56 682.12 5508.25 00:08:32.314 PCIE (0000:00:11.0) NSID 1 from core 0: 10833.70 42.32 1476.33 703.01 5700.56 00:08:32.314 PCIE (0000:00:13.0) NSID 1 from core 0: 10833.70 42.32 1476.29 646.49 6590.38 00:08:32.314 PCIE (0000:00:12.0) NSID 1 from core 0: 10833.70 42.32 1476.24 622.23 6199.79 00:08:32.314 PCIE (0000:00:12.0) NSID 2 from core 0: 10833.70 42.32 1476.13 610.76 5547.25 00:08:32.314 PCIE (0000:00:12.0) NSID 3 from core 0: 10833.70 42.32 1476.07 570.31 5791.90 00:08:32.314 ======================================================== 00:08:32.314 Total : 65002.20 253.91 1476.10 570.31 6590.38 00:08:32.314 00:08:32.314 12:06:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63999 00:08:32.314 12:06:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64074 00:08:32.314 12:06:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:32.314 12:06:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:32.314 12:06:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64075 00:08:32.314 12:06:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:35.590 Initializing NVMe Controllers 00:08:35.590 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:35.590 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:35.590 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:35.591 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:35.591 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:35.591 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:35.591 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:35.591 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:35.591 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:35.591 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:35.591 Initialization complete. Launching workers. 
00:08:35.591 ======================================================== 00:08:35.591 Latency(us) 00:08:35.591 Device Information : IOPS MiB/s Average min max 00:08:35.591 PCIE (0000:00:10.0) NSID 1 from core 1: 8023.18 31.34 1992.85 716.22 6453.10 00:08:35.591 PCIE (0000:00:11.0) NSID 1 from core 1: 8023.18 31.34 1993.81 738.55 5868.72 00:08:35.591 PCIE (0000:00:13.0) NSID 1 from core 1: 8023.18 31.34 1993.78 727.15 6545.95 00:08:35.591 PCIE (0000:00:12.0) NSID 1 from core 1: 8023.18 31.34 1993.73 733.93 6105.37 00:08:35.591 PCIE (0000:00:12.0) NSID 2 from core 1: 8023.18 31.34 1993.80 740.02 6349.93 00:08:35.591 PCIE (0000:00:12.0) NSID 3 from core 1: 8023.18 31.34 1993.76 744.74 6461.52 00:08:35.591 ======================================================== 00:08:35.591 Total : 48139.11 188.04 1993.62 716.22 6545.95 00:08:35.591 00:08:35.591 Initializing NVMe Controllers 00:08:35.591 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:35.591 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:35.591 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:35.591 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:35.591 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:35.591 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:35.591 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:35.591 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:35.591 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:35.591 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:35.591 Initialization complete. Launching workers. 00:08:35.591 ======================================================== 00:08:35.591 Latency(us) 00:08:35.591 Device Information : IOPS MiB/s Average min max 00:08:35.591 PCIE (0000:00:10.0) NSID 1 from core 0: 7943.14 31.03 2012.99 706.56 6105.79 00:08:35.591 PCIE (0000:00:11.0) NSID 1 from core 0: 7943.14 31.03 2014.01 734.14 6288.86 00:08:35.591 PCIE (0000:00:13.0) NSID 1 from core 0: 7943.14 31.03 2014.04 735.32 6112.57 00:08:35.591 PCIE (0000:00:12.0) NSID 1 from core 0: 7943.14 31.03 2014.09 736.46 5999.09 00:08:35.591 PCIE (0000:00:12.0) NSID 2 from core 0: 7943.14 31.03 2014.28 730.14 5552.52 00:08:35.591 PCIE (0000:00:12.0) NSID 3 from core 0: 7943.14 31.03 2014.42 727.76 5549.84 00:08:35.591 ======================================================== 00:08:35.591 Total : 47658.86 186.17 2013.97 706.56 6288.86 00:08:35.591 00:08:37.543 Initializing NVMe Controllers 00:08:37.543 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.543 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.543 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.543 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.543 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:37.543 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:37.543 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:37.543 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:37.543 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:37.543 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:37.543 Initialization complete. Launching workers. 
00:08:37.543 ======================================================== 00:08:37.543 Latency(us) 00:08:37.543 Device Information : IOPS MiB/s Average min max 00:08:37.543 PCIE (0000:00:10.0) NSID 1 from core 2: 4624.06 18.06 3458.09 715.46 12571.86 00:08:37.543 PCIE (0000:00:11.0) NSID 1 from core 2: 4624.06 18.06 3459.74 726.29 14039.34 00:08:37.543 PCIE (0000:00:13.0) NSID 1 from core 2: 4624.06 18.06 3459.68 742.88 13574.19 00:08:37.543 PCIE (0000:00:12.0) NSID 1 from core 2: 4624.06 18.06 3459.28 753.99 13240.44 00:08:37.543 PCIE (0000:00:12.0) NSID 2 from core 2: 4624.06 18.06 3459.39 689.40 12977.20 00:08:37.543 PCIE (0000:00:12.0) NSID 3 from core 2: 4624.06 18.06 3459.35 631.93 12942.76 00:08:37.543 ======================================================== 00:08:37.543 Total : 27744.38 108.38 3459.26 631.93 14039.34 00:08:37.543 00:08:37.543 12:06:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64074 00:08:37.543 12:06:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64075 00:08:37.543 00:08:37.543 real 0m10.735s 00:08:37.543 user 0m18.426s 00:08:37.543 sys 0m0.645s 00:08:37.543 12:06:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.543 12:06:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:37.543 ************************************ 00:08:37.543 END TEST nvme_multi_secondary 00:08:37.543 ************************************ 00:08:37.543 12:06:38 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:37.543 12:06:38 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:37.543 12:06:38 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63036 ]] 00:08:37.543 12:06:38 nvme -- common/autotest_common.sh@1094 -- # kill 63036 00:08:37.543 12:06:38 nvme -- common/autotest_common.sh@1095 -- # wait 63036 00:08:37.543 [2024-11-25 12:06:38.529016] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.529064] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.529081] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.529092] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.530529] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.530563] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.530573] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.530583] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.532054] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 
00:08:37.543 [2024-11-25 12:06:38.532084] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.532093] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.532103] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.533506] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.533542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.533551] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.543 [2024-11-25 12:06:38.533562] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63952) is not found. Dropping the request. 00:08:37.802 12:06:38 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:37.802 12:06:38 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:37.802 12:06:38 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:37.802 12:06:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.802 12:06:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.802 12:06:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.802 ************************************ 00:08:37.802 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:37.802 ************************************ 00:08:37.802 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:37.803 * Looking for test storage... 
00:08:37.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.803 --rc genhtml_branch_coverage=1 00:08:37.803 --rc genhtml_function_coverage=1 00:08:37.803 --rc genhtml_legend=1 00:08:37.803 --rc geninfo_all_blocks=1 00:08:37.803 --rc geninfo_unexecuted_blocks=1 00:08:37.803 00:08:37.803 ' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.803 --rc genhtml_branch_coverage=1 00:08:37.803 --rc genhtml_function_coverage=1 00:08:37.803 --rc genhtml_legend=1 00:08:37.803 --rc geninfo_all_blocks=1 00:08:37.803 --rc geninfo_unexecuted_blocks=1 00:08:37.803 00:08:37.803 ' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.803 --rc genhtml_branch_coverage=1 00:08:37.803 --rc genhtml_function_coverage=1 00:08:37.803 --rc genhtml_legend=1 00:08:37.803 --rc geninfo_all_blocks=1 00:08:37.803 --rc geninfo_unexecuted_blocks=1 00:08:37.803 00:08:37.803 ' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.803 --rc genhtml_branch_coverage=1 00:08:37.803 --rc genhtml_function_coverage=1 00:08:37.803 --rc genhtml_legend=1 00:08:37.803 --rc geninfo_all_blocks=1 00:08:37.803 --rc geninfo_unexecuted_blocks=1 00:08:37.803 00:08:37.803 ' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:37.803 
12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64236 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64236 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64236 ']' 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
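The bring-up that follows is the standard autotest pattern: launch spdk_tgt, block until its default RPC socket answers, attach the controller as bdev nvme0, then arm a one-shot error injection that holds the next admin Get Features command (opcode 10) for up to 15 s and completes it with status SCT 0 / SC 1 (Invalid Opcode, the "00/01" seen later in the trace) once the controller is reset. Reassembled as a sketch, with the rpc.py calls exactly as traced below and a polling loop standing in for waitforlisten:

```bash
rootdir=/home/vagrant/spdk_repo/spdk
rpc() { "$rootdir/scripts/rpc.py" "$@"; }

"$rootdir/build/bin/spdk_tgt" -m 0xF &
spdk_target_pid=$!

# Minimal stand-in for waitforlisten: poll until /var/tmp/spdk.sock
# (rpc.py's default socket) starts accepting RPCs.
until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# Hold the next admin Get Features (opc 10) for 15 s; with --do_not_submit
# it is completed manually with SCT 0 / SC 1 when the controller resets.
rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
```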
00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.803 12:06:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.062 [2024-11-25 12:06:38.937292] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:08:38.062 [2024-11-25 12:06:38.937413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64236 ] 00:08:38.062 [2024-11-25 12:06:39.107485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.324 [2024-11-25 12:06:39.215315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.324 [2024-11-25 12:06:39.215924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.324 [2024-11-25 12:06:39.216270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.324 [2024-11-25 12:06:39.216358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.890 nvme0n1 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_AW6SJ.txt 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.890 true 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732536399 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64259 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:38.890 12:06:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.417 [2024-11-25 12:06:41.901294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:41.417 [2024-11-25 12:06:41.901837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:41.417 [2024-11-25 12:06:41.901877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:41.417 [2024-11-25 12:06:41.901892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.417 [2024-11-25 12:06:41.903571] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.417 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64259 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64259 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64259 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_AW6SJ.txt 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_AW6SJ.txt 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64236 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64236 ']' 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64236 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.417 12:06:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64236 00:08:41.417 12:06:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.417 killing process with pid 64236 00:08:41.417 12:06:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.417 12:06:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64236' 00:08:41.417 12:06:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64236 00:08:41.417 12:06:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64236 00:08:42.800 12:06:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:42.800 12:06:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:42.800 00:08:42.800 real 0m4.888s 00:08:42.800 user 0m17.353s 00:08:42.800 sys 0m0.494s 00:08:42.800 ************************************ 00:08:42.800 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:08:42.800 ************************************ 00:08:42.800 12:06:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.800 12:06:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.800 12:06:43 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:42.800 12:06:43 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:42.800 12:06:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.800 12:06:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.800 12:06:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.800 ************************************ 00:08:42.800 START TEST nvme_fio 00:08:42.800 ************************************ 00:08:42.800 12:06:43 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:42.800 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:42.800 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:42.800 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:42.800 12:06:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:42.800 12:06:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:42.800 12:06:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:42.800 12:06:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:42.800 12:06:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:42.800 12:06:43 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:42.800 12:06:43 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:42.800 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:42.800 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:42.800 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:42.800 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:42.800 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.162 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.162 12:06:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:43.162 12:06:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:43.162 12:06:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:43.162 12:06:44 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:43.162 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:43.163 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:43.163 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:43.163 12:06:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.424 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:43.424 fio-3.35 00:08:43.424 Starting 1 thread 00:08:47.619 00:08:47.619 test: (groupid=0, jobs=1): err= 0: pid=64404: Mon Nov 25 12:06:48 2024 00:08:47.619 read: IOPS=16.2k, BW=63.4MiB/s (66.4MB/s)(127MiB/2001msec) 00:08:47.619 slat (nsec): min=4218, max=70391, avg=6254.01, stdev=3339.25 00:08:47.619 clat (usec): min=826, max=12253, avg=3914.28, stdev=1411.38 00:08:47.619 lat (usec): min=831, max=12278, avg=3920.53, stdev=1412.67 00:08:47.619 clat percentiles (usec): 00:08:47.619 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:08:47.619 | 30.00th=[ 2966], 40.00th=[ 3163], 50.00th=[ 3392], 60.00th=[ 3785], 00:08:47.619 | 70.00th=[ 4359], 80.00th=[ 5145], 90.00th=[ 6063], 95.00th=[ 6587], 00:08:47.619 | 99.00th=[ 8160], 99.50th=[ 9110], 99.90th=[10028], 99.95th=[10552], 00:08:47.619 | 99.99th=[11469] 00:08:47.619 bw ( KiB/s): min=63568, max=64280, per=98.63%, avg=63986.67, stdev=372.18, samples=3 00:08:47.619 iops : min=15892, max=16070, avg=15996.67, stdev=93.04, samples=3 00:08:47.619 write: IOPS=16.3k, BW=63.5MiB/s (66.6MB/s)(127MiB/2001msec); 0 zone resets 00:08:47.619 slat (nsec): min=4297, max=86931, avg=6418.69, stdev=3431.11 00:08:47.619 clat (usec): min=816, max=11694, avg=3944.54, stdev=1404.89 00:08:47.619 lat (usec): min=822, max=11705, avg=3950.96, stdev=1406.18 00:08:47.619 clat percentiles (usec): 00:08:47.619 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2802], 00:08:47.619 | 30.00th=[ 2999], 40.00th=[ 3195], 50.00th=[ 3425], 60.00th=[ 3818], 00:08:47.619 | 70.00th=[ 4424], 80.00th=[ 5211], 90.00th=[ 6128], 95.00th=[ 6587], 00:08:47.619 | 99.00th=[ 8094], 99.50th=[ 8848], 99.90th=[ 9896], 99.95th=[10421], 00:08:47.619 | 99.99th=[10814] 00:08:47.619 bw ( KiB/s): min=62928, max=64408, per=97.95%, avg=63696.00, stdev=741.59, samples=3 00:08:47.619 iops : min=15732, max=16102, avg=15924.00, stdev=185.40, samples=3 00:08:47.619 lat (usec) : 1000=0.03% 00:08:47.619 lat (msec) : 2=0.38%, 4=63.43%, 10=36.07%, 20=0.10% 00:08:47.619 cpu : usr=98.55%, sys=0.10%, ctx=4, majf=0, minf=607 00:08:47.619 
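fio_nvme wraps every run in the sanitizer dance traced above: ldd inspects the SPDK fio plugin, and when it links against libasan, that library must appear in LD_PRELOAD ahead of the external ioengine or the ASan runtime aborts at load time. Note also that the PCIe address in --filename is written with dots (0000.00.10.0) because fio's option parser would otherwise split on the colons. A condensed sketch of the pattern:

```bash
rootdir=/home/vagrant/spdk_repo/spdk
plugin=$rootdir/build/fio/spdk_nvme

# Preload ASan ahead of the external ioengine when the plugin links it,
# mirroring the ldd | grep libasan | awk '{print $3}' probe in the trace.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
export LD_PRELOAD="${asan_lib:+$asan_lib }$plugin"

# Colons in the traddr are rewritten as dots so fio does not split the
# filename option on them.
/usr/src/fio/fio "$rootdir/app/fio/nvme/example_config.fio" \
    "--filename=trtype=PCIe traddr=0000.00.10.0" --bs=4096
```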
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:47.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:47.619 issued rwts: total=32452,32532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:47.619 00:08:47.619 Run status group 0 (all jobs): 00:08:47.619 READ: bw=63.4MiB/s (66.4MB/s), 63.4MiB/s-63.4MiB/s (66.4MB/s-66.4MB/s), io=127MiB (133MB), run=2001-2001msec 00:08:47.619 WRITE: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=127MiB (133MB), run=2001-2001msec 00:08:47.892 ----------------------------------------------------- 00:08:47.892 Suppressions used: 00:08:47.892 count bytes template 00:08:47.892 1 32 /usr/src/fio/parse.c 00:08:47.892 1 8 libtcmalloc_minimal.so 00:08:47.892 ----------------------------------------------------- 00:08:47.892 00:08:47.892 12:06:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:47.892 12:06:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:47.892 12:06:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:47.892 12:06:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:48.151 12:06:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:48.151 12:06:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:48.412 12:06:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:48.412 12:06:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:48.412 12:06:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:48.412 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:48.412 fio-3.35 00:08:48.412 Starting 1 thread 00:08:53.703 00:08:53.703 test: (groupid=0, jobs=1): err= 0: pid=64460: Mon Nov 25 12:06:54 2024 00:08:53.703 read: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec) 00:08:53.703 slat (usec): min=3, max=107, avg= 5.46, stdev= 3.00 00:08:53.703 clat (usec): min=270, max=10494, avg=3267.97, stdev=1289.58 00:08:53.703 lat (usec): min=275, max=10499, avg=3273.43, stdev=1290.95 00:08:53.703 clat percentiles (usec): 00:08:53.703 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:08:53.703 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 2933], 00:08:53.703 | 70.00th=[ 3261], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 6128], 00:08:53.703 | 99.00th=[ 7635], 99.50th=[ 8291], 99.90th=[ 9372], 99.95th=[ 9765], 00:08:53.703 | 99.99th=[10290] 00:08:53.703 bw ( KiB/s): min=61248, max=86480, per=93.16%, avg=72546.67, stdev=12820.67, samples=3 00:08:53.703 iops : min=15312, max=21620, avg=18136.67, stdev=3205.17, samples=3 00:08:53.703 write: IOPS=19.4k, BW=75.9MiB/s (79.6MB/s)(152MiB/2001msec); 0 zone resets 00:08:53.703 slat (nsec): min=3491, max=88655, avg=5626.33, stdev=3006.83 00:08:53.703 clat (usec): min=230, max=10670, avg=3291.27, stdev=1297.65 00:08:53.703 lat (usec): min=235, max=10686, avg=3296.89, stdev=1298.98 00:08:53.703 clat percentiles (usec): 00:08:53.703 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:08:53.703 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2769], 60.00th=[ 2933], 00:08:53.703 | 70.00th=[ 3294], 80.00th=[ 4178], 90.00th=[ 5342], 95.00th=[ 6194], 00:08:53.703 | 99.00th=[ 7570], 99.50th=[ 8225], 99.90th=[ 9241], 99.95th=[ 9634], 00:08:53.703 | 99.99th=[10159] 00:08:53.703 bw ( KiB/s): min=60920, max=86704, per=93.26%, avg=72485.33, stdev=13095.18, samples=3 00:08:53.703 iops : min=15230, max=21676, avg=18121.33, stdev=3273.80, samples=3 00:08:53.703 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:08:53.703 lat (msec) : 2=1.80%, 4=76.76%, 10=21.36%, 20=0.02% 00:08:53.703 cpu : usr=98.30%, sys=0.25%, ctx=202, majf=0, minf=607 00:08:53.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:53.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.703 issued rwts: total=38954,38883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:53.703 00:08:53.703 Run status group 0 (all jobs): 00:08:53.703 READ: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (160MB), run=2001-2001msec 00:08:53.703 WRITE: bw=75.9MiB/s (79.6MB/s), 75.9MiB/s-75.9MiB/s (79.6MB/s-79.6MB/s), io=152MiB (159MB), run=2001-2001msec 00:08:53.703 ----------------------------------------------------- 00:08:53.703 Suppressions used: 00:08:53.703 count bytes template 00:08:53.703 1 32 /usr/src/fio/parse.c 00:08:53.703 1 8 libtcmalloc_minimal.so 00:08:53.703 ----------------------------------------------------- 00:08:53.703 00:08:53.703 12:06:54 nvme.nvme_fio 
-- nvme/nvme.sh@44 -- # ran_fio=true 00:08:53.704 12:06:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:53.704 12:06:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:53.704 12:06:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:53.966 12:06:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:53.966 12:06:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:54.224 12:06:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:54.224 12:06:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:54.224 12:06:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.481 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:54.481 fio-3.35 00:08:54.481 Starting 1 thread 00:09:02.585 00:09:02.585 test: (groupid=0, jobs=1): err= 0: pid=64521: Mon Nov 25 12:07:03 2024 00:09:02.585 read: IOPS=23.3k, BW=91.2MiB/s (95.6MB/s)(182MiB/2001msec) 00:09:02.585 slat (nsec): min=4210, max=79427, avg=5117.31, stdev=2262.25 00:09:02.585 clat (usec): min=246, max=8250, avg=2733.01, stdev=794.69 00:09:02.585 lat (usec): min=251, max=8254, avg=2738.13, stdev=796.07 00:09:02.585 clat percentiles (usec): 00:09:02.585 | 1.00th=[ 1696], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:02.585 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 
2507], 60.00th=[ 2540], 00:09:02.585 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3195], 95.00th=[ 4817], 00:09:02.585 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 6915], 99.95th=[ 7767], 00:09:02.585 | 99.99th=[ 8160] 00:09:02.585 bw ( KiB/s): min=84656, max=95064, per=97.25%, avg=90781.33, stdev=5443.18, samples=3 00:09:02.585 iops : min=21164, max=23766, avg=22695.33, stdev=1360.79, samples=3 00:09:02.585 write: IOPS=23.2k, BW=90.6MiB/s (95.0MB/s)(181MiB/2001msec); 0 zone resets 00:09:02.585 slat (nsec): min=4303, max=59722, avg=5357.25, stdev=2247.43 00:09:02.585 clat (usec): min=216, max=8225, avg=2747.30, stdev=812.10 00:09:02.585 lat (usec): min=221, max=8230, avg=2752.66, stdev=813.48 00:09:02.585 clat percentiles (usec): 00:09:02.585 | 1.00th=[ 1713], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:02.585 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:09:02.585 | 70.00th=[ 2606], 80.00th=[ 2769], 90.00th=[ 3261], 95.00th=[ 4883], 00:09:02.585 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 7177], 99.95th=[ 7635], 00:09:02.585 | 99.99th=[ 8094] 00:09:02.585 bw ( KiB/s): min=84520, max=94400, per=98.09%, avg=90965.33, stdev=5585.85, samples=3 00:09:02.585 iops : min=21130, max=23600, avg=22741.33, stdev=1396.46, samples=3 00:09:02.585 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:09:02.585 lat (msec) : 2=2.75%, 4=90.11%, 10=7.08% 00:09:02.585 cpu : usr=99.20%, sys=0.05%, ctx=2, majf=0, minf=607 00:09:02.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:02.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.585 issued rwts: total=46699,46390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.585 00:09:02.585 Run status group 0 (all jobs): 00:09:02.585 READ: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=182MiB (191MB), run=2001-2001msec 00:09:02.585 WRITE: bw=90.6MiB/s (95.0MB/s), 90.6MiB/s-90.6MiB/s (95.0MB/s-95.0MB/s), io=181MiB (190MB), run=2001-2001msec 00:09:02.585 ----------------------------------------------------- 00:09:02.585 Suppressions used: 00:09:02.585 count bytes template 00:09:02.585 1 32 /usr/src/fio/parse.c 00:09:02.585 1 8 libtcmalloc_minimal.so 00:09:02.585 ----------------------------------------------------- 00:09:02.585 00:09:02.585 12:07:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:02.585 12:07:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:02.585 12:07:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:02.585 12:07:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:02.843 12:07:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:02.843 12:07:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:02.844 12:07:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:02.844 12:07:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:02.844 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:03.101 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:03.101 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:03.101 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:03.101 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:03.101 12:07:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:03.101 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.101 fio-3.35 00:09:03.101 Starting 1 thread 00:09:13.082 00:09:13.082 test: (groupid=0, jobs=1): err= 0: pid=64582: Mon Nov 25 12:07:13 2024 00:09:13.082 read: IOPS=22.5k, BW=87.8MiB/s (92.0MB/s)(176MiB/2001msec) 00:09:13.082 slat (nsec): min=3348, max=79208, avg=5024.35, stdev=2176.26 00:09:13.082 clat (usec): min=214, max=74605, avg=2800.31, stdev=2063.03 00:09:13.082 lat (usec): min=218, max=74609, avg=2805.34, stdev=2063.49 00:09:13.082 clat percentiles (usec): 00:09:13.082 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:13.082 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:13.082 | 70.00th=[ 2606], 80.00th=[ 2769], 90.00th=[ 3130], 95.00th=[ 4555], 00:09:13.082 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 8160], 99.95th=[69731], 00:09:13.082 | 99.99th=[71828] 00:09:13.082 bw ( KiB/s): min=82392, max=94568, per=99.20%, avg=89152.00, stdev=6198.27, samples=3 00:09:13.082 iops : min=20598, max=23642, avg=22288.00, stdev=1549.57, samples=3 00:09:13.082 write: IOPS=22.3k, BW=87.2MiB/s (91.5MB/s)(175MiB/2001msec); 0 zone resets 00:09:13.082 slat (nsec): min=3457, max=84957, avg=5267.97, stdev=2201.75 00:09:13.082 clat (usec): min=246, max=75350, avg=2894.29, stdev=3287.00 00:09:13.082 lat (usec): min=251, max=75354, avg=2899.56, stdev=3287.28 00:09:13.082 clat percentiles (usec): 00:09:13.082 | 1.00th=[ 1844], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:13.082 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:13.082 | 70.00th=[ 2606], 80.00th=[ 2769], 90.00th=[ 3163], 95.00th=[ 4621], 
00:09:13.082 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[74974], 99.95th=[74974], 00:09:13.082 | 99.99th=[74974] 00:09:13.082 bw ( KiB/s): min=82280, max=95520, per=100.00%, avg=89346.67, stdev=6665.05, samples=3 00:09:13.082 iops : min=20570, max=23880, avg=22336.67, stdev=1666.26, samples=3 00:09:13.082 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.04% 00:09:13.082 lat (msec) : 2=1.53%, 4=91.88%, 10=6.37%, 100=0.14% 00:09:13.082 cpu : usr=99.20%, sys=0.05%, ctx=4, majf=0, minf=605 00:09:13.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:13.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.082 issued rwts: total=44956,44690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.082 00:09:13.082 Run status group 0 (all jobs): 00:09:13.082 READ: bw=87.8MiB/s (92.0MB/s), 87.8MiB/s-87.8MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec 00:09:13.082 WRITE: bw=87.2MiB/s (91.5MB/s), 87.2MiB/s-87.2MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:09:13.082 ----------------------------------------------------- 00:09:13.082 Suppressions used: 00:09:13.082 count bytes template 00:09:13.082 1 32 /usr/src/fio/parse.c 00:09:13.082 1 8 libtcmalloc_minimal.so 00:09:13.082 ----------------------------------------------------- 00:09:13.082 00:09:13.082 12:07:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.082 12:07:13 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:13.082 00:09:13.082 real 0m29.815s 00:09:13.082 user 0m22.291s 00:09:13.082 sys 0m11.483s 00:09:13.082 12:07:13 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.082 ************************************ 00:09:13.082 END TEST nvme_fio 00:09:13.082 ************************************ 00:09:13.082 12:07:13 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:13.082 00:09:13.082 real 1m39.266s 00:09:13.082 user 3m43.961s 00:09:13.082 sys 0m22.014s 00:09:13.082 12:07:13 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.082 12:07:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:13.082 ************************************ 00:09:13.082 END TEST nvme 00:09:13.082 ************************************ 00:09:13.082 12:07:13 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:13.082 12:07:13 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:13.082 12:07:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.082 12:07:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.082 12:07:13 -- common/autotest_common.sh@10 -- # set +x 00:09:13.082 ************************************ 00:09:13.082 START TEST nvme_scc 00:09:13.082 ************************************ 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:13.082 * Looking for test storage... 
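[Editor's note] The fio_nvme/fio_plugin helper traced in the nvme_fio runs above does not assume fio itself was built with ASAN: it resolves the sanitizer runtime out of the SPDK plugin's dynamic dependencies (ldd | grep libasan | awk '{print $3}') and preloads it ahead of the ioengine. A minimal sketch of that pattern, with placeholder job file name; this is not the verbatim autotest_common.sh body:

  # Sketch: preload the ASAN runtime linked into the SPDK fio plugin
  # before handing control to an uninstrumented fio binary.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  fio_bin=/usr/src/fio/fio

  # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
  # field 3 is the resolved library path.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  if [[ -n "$asan_lib" ]]; then
    # The sanitizer must be the first DSO initialized, so it
    # leads LD_PRELOAD, followed by the spdk ioengine itself.
    LD_PRELOAD="$asan_lib $plugin" "$fio_bin" job.fio   # job.fio is a placeholder
  else
    LD_PRELOAD="$plugin" "$fio_bin" job.fio
  fi

This matches the LD_PRELOAD='/usr/lib64/libasan.so.8 .../spdk_nvme' invocations visible in each run above.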
00:09:13.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.082 12:07:13 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:13.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.082 --rc genhtml_branch_coverage=1 00:09:13.082 --rc genhtml_function_coverage=1 00:09:13.082 --rc genhtml_legend=1 00:09:13.082 --rc geninfo_all_blocks=1 00:09:13.082 --rc geninfo_unexecuted_blocks=1 00:09:13.082 00:09:13.082 ' 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:13.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.082 --rc genhtml_branch_coverage=1 00:09:13.082 --rc genhtml_function_coverage=1 00:09:13.082 --rc genhtml_legend=1 00:09:13.082 --rc geninfo_all_blocks=1 00:09:13.082 --rc geninfo_unexecuted_blocks=1 00:09:13.082 00:09:13.082 ' 00:09:13.082 12:07:13 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:13.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.083 --rc genhtml_branch_coverage=1 00:09:13.083 --rc genhtml_function_coverage=1 00:09:13.083 --rc genhtml_legend=1 00:09:13.083 --rc geninfo_all_blocks=1 00:09:13.083 --rc geninfo_unexecuted_blocks=1 00:09:13.083 00:09:13.083 ' 00:09:13.083 12:07:13 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:13.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.083 --rc genhtml_branch_coverage=1 00:09:13.083 --rc genhtml_function_coverage=1 00:09:13.083 --rc genhtml_legend=1 00:09:13.083 --rc geninfo_all_blocks=1 00:09:13.083 --rc geninfo_unexecuted_blocks=1 00:09:13.083 00:09:13.083 ' 00:09:13.083 12:07:13 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.083 12:07:13 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.083 12:07:13 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.083 12:07:13 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.083 12:07:13 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.083 12:07:13 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.083 12:07:13 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.083 12:07:13 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.083 12:07:13 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:13.083 12:07:13 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
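[Editor's note] The lt/cmp_versions helpers traced in the nvme_scc prologue above split version strings on '.', '-' and ':' (IFS=.-: read -ra ver1) and compare components numerically, which is how the lcov 1.15-vs-2 check works without sort -V. A simplified re-implementation of the idea, not the verbatim scripts/common.sh body:

  # Sketch: numeric, component-wise version comparison in the style
  # of scripts/common.sh. Returns 0 when "$1 < $2" holds.
  version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      # Missing components compare as 0, so 1.15 == 1.15.0.
      local a=${v1[i]:-0} b=${v2[i]:-0}
      (( a < b )) && return 0
      (( a > b )) && return 1
    done
    return 1   # equal is not less-than
  }

  version_lt 1.15 2 && echo "lcov is pre-2.x"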
00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:13.083 12:07:13 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:13.083 12:07:13 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.083 12:07:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:13.083 12:07:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:13.083 12:07:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:13.083 12:07:13 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:13.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.083 Waiting for block devices as requested 00:09:13.083 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.340 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.340 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.340 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.610 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:18.610 12:07:19 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:18.610 12:07:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:18.610 12:07:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:18.610 12:07:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.610 12:07:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
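[Editor's note] Each eval line that follows is nvme_get turning one "field : value" row of `nvme id-ctrl` output into an entry of a global associative array (nvme0[vid]=0x1b36, nvme0[ssvid]=0x1af4, and so on), driven by the IFS=: / read -r reg val loop visible in the trace. The core of that loop looks roughly like this; the real functions.sh adds the shift/nameref machinery seen above and is not reproduced verbatim:

  # Sketch: populate an associative array from "field : value" rows of
  # nvme-cli identify output, as scan_nvme_ctrls does per controller.
  declare -gA nvme0=()

  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # field names carry padding; strip it
    val=${val# }               # drop the single space after ':'
    [[ -n $reg ]] || continue
    # read -r puts everything after the first ':' into val, so values
    # that themselves contain colons (e.g. subnqn) survive intact.
    nvme0[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)

  echo "vendor=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"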
00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.610 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
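[Editor's note] Fields such as oacs=0x12a and lpa=0x7 captured above are bitmasks, and later checks in functions.sh test individual capability bits rather than the whole word. A hypothetical decode of the two values recorded here, with bit meanings as given in the NVMe base specification (the bit names are spec recollection, not taken from this log):

  # Sketch: capability words are bitmasks; bash tests single bits with
  # arithmetic AND. Values taken from the controller scan above.
  oacs=0x12a   # bits 1,3,5,8 set: Format NVM, NS management,
               # Directives, Doorbell Buffer Config (per NVMe spec)
  lpa=0x7      # bits 0-2 set: per-NS SMART, Commands Supported and
               # Effects log, extended data for Get Log Page

  if (( oacs & (1 << 3) )); then
    echo "controller supports Namespace Management"
  fi
  if (( lpa & (1 << 1) )); then
    echo "Commands Supported and Effects log page available"
  fi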
00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.611 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:18.612 12:07:19 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:18.612 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.613 12:07:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # 
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:09:18.613 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:18.614 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
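Every `name[field]=value` record above comes from the same loop traced at functions.sh @21-@23: split each `field : value` line of nvme-cli output on `:`, skip lines with no value, and eval the pair into the global associative array created at @20. A simplified sketch of that pattern — not the verbatim SPDK helper, which also normalizes keys such as `lbaf  0` into `lbaf0`:

```bash
# Sketch of the nvme_get pattern: $1 names a global associative array,
# $2 is the device node; the nvme-cli path matches the @16 trace line.
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                  # e.g. ng0n1=()
    while IFS=: read -r reg val; do
        reg=${reg%% *}                   # trim padding after the field name
        val=${val# }                     # drop the space after the colon
        [[ -n $val ]] || continue        # skip headers and blank lines
        eval "${ref}[\$reg]=\$val"       # e.g. ng0n1[nsze]=0x140000
    done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
}
# Usage: nvme_get_sketch ng0n1 /dev/ng0n1; echo "${ng0n1[nsze]}"
```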
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:18.615 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
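Worked example from the dump just above: flbas=0x4 selects LBA format 4, whose lbads:12 means 2^12-byte data blocks and ms:0 means no metadata, so nsze=0x140000 blocks comes out to a 5 GiB namespace. In bash arithmetic (values copied from the log):

```bash
flbas=0x4
lbads=12                 # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
nsze=0x140000
fmt=$(( flbas & 0xf ))   # low nibble of flbas -> LBA format index 4
bs=$(( 1 << lbads ))     # 2^12 = 4096-byte logical blocks
echo "format #$fmt, ${bs}B blocks, $(( nsze * bs / 1024 / 1024 )) MiB namespace"
# -> format #4, 4096B blocks, 5120 MiB namespace
```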
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:18.616 12:07:19 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:18.616 12:07:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:18.616 12:07:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:18.616 12:07:19 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
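The @60-@63 lines register controller nvme0 in three associative maps plus an index-ordered array, and the scripts/common.sh lines show pci_can_use accepting 0000:00:10.0 because no allow list is configured (the empty left-hand side of the `=~` test). A hedged sketch of that bookkeeping — names follow the trace, but the real pci_can_use also honors a block list, omitted here:

```bash
declare -A ctrls nvmes bdfs   # controller name -> name / ns-map name / PCI BDF
declare -a ordered_ctrls      # index 0 -> nvme0, index 1 -> nvme1, ...

# Allow every device when no allow list is set, else require a match.
pci_can_use() {
    [[ -z ${PCI_ALLOWED-} ]] && return 0
    [[ " $PCI_ALLOWED " == *" $1 "* ]]
}

# e.g. register_ctrl nvme0 0000:00:11.0
register_ctrl() {
    local ctrl_dev=$1 pci=$2
    pci_can_use "$pci" || return 0
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # per-controller namespace map
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # "nvme0" -> index 0
}
```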
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:09:18.616 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
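The ver value captured above, 0x10400, follows the NVMe version register layout: a 16-bit major, 8-bit minor, and 8-bit tertiary field, i.e. this QEMU controller reports NVMe 1.4.0. Decoded in bash:

```bash
ver=0x10400
printf 'NVMe %d.%d.%d\n' \
    $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
# -> NVMe 1.4.0
```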
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:09:18.617 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:18.618 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.619 12:07:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.619 12:07:19 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.619 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
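[Editor's note] Before this point the trace switched from the controller to its namespaces: `local -n _ctrl_ns=nvme1_ns` binds the per-controller namespace map, and the extglob in the for loop matches both kinds of sysfs namespace node, the generic character device (ng1n1) and the block device (nvme1n1). A sketch of that walk, with echo standing in for the nvme_get call:

#!/usr/bin/env bash
# Sketch of the namespace walk traced above. For ctrl=/sys/class/nvme/nvme1,
# "${ctrl##*nvme}" is "1" and "${ctrl##*/}" is "nvme1", so the pattern
# expands to @(ng1|nvme1n)* and matches ng1n1 as well as nvme1n1.
shopt -s extglob
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        # ${ns##*n} strips through the last 'n', yielding the namespace
        # index: both ng1n1 and nvme1n1 map to index 1.
        echo "${ctrl##*/}: node ${ns##*/} -> namespace index ${ns##*n}"
    done
done

Because both node names reduce to the same index, `_ctrl_ns[1]` is first set to ng1n1 and later overwritten with nvme1n1 (visible at functions.sh@58 further down), so the block device wins.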
00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:18.620 12:07:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:18.620 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 
12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
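[Editor's note] The id-ns values parsed for ng1n1 above (and repeated for nvme1n1 below) are enough to size the namespace: flbas=0x7 selects LBA format 7, whose descriptor is 'ms:64 lbads:12 rp:0 (in use)', i.e. 4096-byte data blocks (2^12) with 64 bytes of metadata each, and nsze=ncap=nuse=0x17a17a means the namespace is fully allocated. A worked check of the arithmetic:

#!/usr/bin/env bash
# Capacity check from the values in the trace; pure arithmetic, no
# device access needed.
nsze=0x17a17a lbads=12
printf '%d blocks\n' $(( nsze ))                          # 1548666
printf '%d bytes\n'  $(( nsze * (1 << lbads) ))           # 6343335936, ~5.9 GiB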
00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:18.621 
12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.621 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:18.622 12:07:19 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:18.622 12:07:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:18.622 12:07:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:18.622 12:07:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.622 12:07:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.622 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:18.623 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2 (id-ctrl /dev/nvme2) fields parsed into the nvme2 array:
00:09:18.623 12:07:19 nvme_scc --   ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:18.623 12:07:19 nvme_scc --   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
00:09:18.623 12:07:19 nvme_scc --   frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:09:18.624 12:07:19 nvme_scc --   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0
00:09:18.624 12:07:19 nvme_scc --   kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0
00:09:18.624 12:07:19 nvme_scc --   endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:18.625 12:07:19 nvme_scc --   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:09:18.625 12:07:19 nvme_scc --   awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:09:18.625 12:07:19 nvme_scc --   subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:18.625 12:07:19 nvme_scc --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:18.625 12:07:19 nvme_scc --   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:18.625 12:07:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:18.625 12:07:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
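The extglob in that loop header is what enumerates the controller's namespace nodes: it matches both the generic character-device entries (ng2n1, ng2n2, ...) and the block-device entries (nvme2n1, ...) under the controller's sysfs directory. A standalone sketch of how the pattern expands, with the controller path hard-coded here purely for illustration:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the glob is
        # /sys/class/nvme/nvme2/@(ng2|nvme2n)* and picks up ng2n1, ng2n2, ng2n3, ...
        echo "namespace node ${ns##*/} -> index ${ns##*n}"   # the index keys _ctrl_ns
    done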
00:09:18.625 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:18.626 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:18.626 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n1 (id-ns /dev/ng2n1) fields parsed into the ng2n1 array:
00:09:18.626 12:07:19 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:18.626 12:07:19 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:18.626 12:07:19 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:18.626 12:07:19 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:18.627 12:07:19 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:18.627 12:07:19 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:18.627 12:07:19 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
00:09:18.627 12:07:19 nvme_scc --   lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:09:18.627 12:07:19 nvme_scc --   lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:18.627 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
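Every per-namespace dump here, like the id-ctrl dump above, comes from the same small helper: nvme_get declares a global associative array, runs nvme-cli, and turns each "name : value" line of its output into an array assignment. A minimal sketch reconstructed from the traced statements (the shipped functions.sh differs in detail; for instance it resolves the nvme-cli binary path itself):

    nvme_get() {                              # e.g. nvme_get ng2n1 nvme id-ns /dev/ng2n1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # declare the target array (nvme2, ng2n1, ...)
        while IFS=: read -r reg val; do       # split "nsze : 0x100000" on the colon
            reg=${reg// /}                    # drop the padding around the field name
            [[ -n $val ]] || continue         # skip headers and lines without a value
            eval "${ref}[$reg]=\"${val# }\""  # ng2n1[nsze]="0x100000", as traced above
        done < <("$@")
    }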
00:09:18.627 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:18.627 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:18.627 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n2 (id-ns /dev/ng2n2) fields parsed into the ng2n2 array:
00:09:18.628 12:07:19 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:18.628 12:07:19 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:18.628 12:07:19 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:18.628 12:07:19 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:18.628 12:07:19 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:18.628 12:07:19 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:18.628 12:07:19 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
00:09:18.628 12:07:19 nvme_scc --   lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:09:18.629 12:07:19 nvme_scc --   lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3 (id-ns /dev/ng2n3) fields parsed into the ng2n3 array:
00:09:18.629 12:07:19 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:18.629 12:07:19 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:18.629 12:07:19 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.629 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.630 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.893 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.894 12:07:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.894 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:18.895 12:07:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.895 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:18.896 12:07:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.896 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:18.897 
12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.897 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:18.898 12:07:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.898 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.899 12:07:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:18.899 12:07:19 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:18.899 12:07:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:18.899 12:07:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:18.899 12:07:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.899 12:07:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.899 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:18.900 12:07:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:18.900 12:07:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 
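For readers following the trace: each `IFS=:` / `read -r reg val` / `eval` triplet above is one pass of the nvme_get helper in nvme/functions.sh, which pipes `nvme id-ctrl` through a read loop and stores every `register : value` pair into a Bash associative array named after the controller (here nvme3). A minimal sketch of that loop, assuming one `reg : val` pair per input line (the real helper also does the ref/`shift` bookkeeping visible at functions.sh@17-23):

nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                      # as traced: local -gA 'nvme3=()'
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # line did not split into "reg : val" -> skip
        reg=${reg//[[:space:]]/}             # normalize the key, e.g. "vid"
        eval "${ref}[$reg]=\"\${val# }\""    # trim one leading space; trailing padding is
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")   # kept, e.g. nvme3[mn]='QEMU NVMe Ctrl '
}

Called as `nvme_get_sketch nvme3 /dev/nvme3`, this produces exactly the assignments the trace shows, e.g. nvme3[oacs]=0x12a.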
12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:18.900 12:07:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.900 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 
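Everything parsed this way is later read back by name rather than by value: get_nvme_ctrl_feature (traced below at functions.sh@69-76 once the scc scan starts) binds a Bash nameref to the controller's array and echoes the requested register. A sketch of that lookup, assuming arrays shaped like the ones being filled here:

get_feature_sketch() {
    local ctrl=$1 reg=$2
    [[ -n $ctrl && -n $reg ]] || return 1
    local -n _ctrl=$ctrl                # nameref: _ctrl aliases the array named e.g. nvme3
    [[ -n ${_ctrl[$reg]} ]] || return 1 # unset register -> fail, as the trace's [[ -n ... ]] does
    echo "${_ctrl[$reg]}"
}
# get_feature_sketch nvme3 oncs   -> 0x15d (see the oncs assignment further below)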
12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:18.901 
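Two of the raw values just captured are worth decoding. wctemp=343 and cctemp=373 (a little further up) are kelvins, the unit the NVMe spec uses for thermal thresholds, i.e. a 70 °C warning and a 100 °C critical limit; sqes=0x66 and cqes=0x44 pack the required and maximum queue-entry sizes as powers of two in the low and high nibbles. In shell arithmetic:

sqes=0x66 cqes=0x44 wctemp=343 cctemp=373
echo "SQE: min $((2 ** (sqes & 0xf))) max $((2 ** (sqes >> 4 & 0xf))) bytes"   # 64 / 64
echo "CQE: min $((2 ** (cqes & 0xf))) max $((2 ** (cqes >> 4 & 0xf))) bytes"   # 16 / 16
echo "warn $((wctemp - 273)) C, crit $((cctemp - 273)) C"                      # 70 / 100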
12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.901 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:18.902 12:07:19 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:18.902 12:07:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:18.902 12:07:19 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:18.902 12:07:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:18.902 12:07:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:18.902 12:07:19 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:19.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.728 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.728 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.728 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.728 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.987 12:07:20 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:19.987 12:07:20 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.987 12:07:20 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.987 12:07:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:19.987 ************************************ 00:09:19.987 START TEST nvme_simple_copy 00:09:19.987 ************************************ 00:09:19.987 12:07:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:20.246 Initializing NVMe Controllers 00:09:20.246 Attaching to 0000:00:10.0 00:09:20.246 Controller supports SCC. Attached to 0000:00:10.0 00:09:20.246 Namespace ID: 1 size: 6GB 00:09:20.246 Initialization complete. 
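The controller election that completed just above is a single bit test: ctrl_has_scc pulls each controller's ONCS (Optional NVM Command Support) word through the nameref lookup and checks bit 8, which advertises the Copy command exercised by this simple-copy test. All four QEMU controllers report oncs=0x15d, which has bit 8 set, so the first entry of the resulting list, nvme1 at 0000:00:10.0, is chosen and the test runs against it (its output continues directly below). The check reduces to:

oncs=0x15d                       # 0b1_0101_1101
if (( oncs & 1 << 8 )); then     # bit 8: Copy (simple copy) command supported
    echo "controller supports scc"
fi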
00:09:20.246 00:09:20.246 Controller QEMU NVMe Ctrl (12340 ) 00:09:20.246 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:20.246 Namespace Block Size:4096 00:09:20.246 Writing LBAs 0 to 63 with Random Data 00:09:20.246 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:20.246 LBAs matching Written Data: 64 00:09:20.246 ************************************ 00:09:20.246 END TEST nvme_simple_copy 00:09:20.246 ************************************ 00:09:20.246 00:09:20.246 real 0m0.268s 00:09:20.246 user 0m0.094s 00:09:20.246 sys 0m0.072s 00:09:20.246 12:07:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.246 12:07:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:20.246 00:09:20.246 real 0m7.617s 00:09:20.246 user 0m1.081s 00:09:20.246 sys 0m1.283s 00:09:20.246 12:07:21 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.246 ************************************ 00:09:20.246 END TEST nvme_scc 00:09:20.246 ************************************ 00:09:20.246 12:07:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:20.246 12:07:21 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:20.246 12:07:21 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:20.246 12:07:21 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:20.246 12:07:21 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:20.246 12:07:21 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:20.246 12:07:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.246 12:07:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.246 12:07:21 -- common/autotest_common.sh@10 -- # set +x 00:09:20.246 ************************************ 00:09:20.246 START TEST nvme_fdp 00:09:20.246 ************************************ 00:09:20.246 12:07:21 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:20.246 * Looking for test storage... 00:09:20.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.246 12:07:21 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.246 12:07:21 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.246 12:07:21 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.246 12:07:21 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.246 12:07:21 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:20.246 12:07:21 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.246 12:07:21 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.246 --rc genhtml_branch_coverage=1 00:09:20.246 --rc genhtml_function_coverage=1 00:09:20.246 --rc genhtml_legend=1 00:09:20.246 --rc geninfo_all_blocks=1 00:09:20.246 --rc geninfo_unexecuted_blocks=1 00:09:20.246 00:09:20.246 ' 00:09:20.246 12:07:21 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.247 --rc genhtml_branch_coverage=1 00:09:20.247 --rc genhtml_function_coverage=1 00:09:20.247 --rc genhtml_legend=1 00:09:20.247 --rc geninfo_all_blocks=1 00:09:20.247 --rc geninfo_unexecuted_blocks=1 00:09:20.247 00:09:20.247 ' 00:09:20.247 12:07:21 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.247 --rc genhtml_branch_coverage=1 00:09:20.247 --rc genhtml_function_coverage=1 00:09:20.247 --rc genhtml_legend=1 00:09:20.247 --rc geninfo_all_blocks=1 00:09:20.247 --rc geninfo_unexecuted_blocks=1 00:09:20.247 00:09:20.247 ' 00:09:20.247 12:07:21 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.247 --rc genhtml_branch_coverage=1 00:09:20.247 --rc genhtml_function_coverage=1 00:09:20.247 --rc genhtml_legend=1 00:09:20.247 --rc geninfo_all_blocks=1 00:09:20.247 --rc geninfo_unexecuted_blocks=1 00:09:20.247 00:09:20.247 ' 00:09:20.247 12:07:21 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.247 12:07:21 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.247 12:07:21 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.247 12:07:21 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.247 12:07:21 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.247 12:07:21 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.247 12:07:21 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.247 12:07:21 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.247 12:07:21 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:20.247 12:07:21 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:20.247 12:07:21 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:20.247 12:07:21 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:20.247 12:07:21 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:20.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.854 Waiting for block devices as requested 00:09:20.854 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.854 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.854 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.146 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.505 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:26.505 12:07:27 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:26.505 12:07:27 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:26.505 12:07:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:26.505 12:07:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:26.505 12:07:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.505 12:07:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.505 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:26.506 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.506 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.506 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:26.507 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 
12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:26.507 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.507 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:26.508 12:07:27 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:26.508 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
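The records above are nvme/functions.sh@16-23 doing their usual work: nvme_get runs /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1, splits every "reg : val" output line on ":" (the IFS=: / read -r reg val pair at functions.sh@21), and evals each pair into the globally declared associative array ng0n1 (functions.sh@20 and @23). A minimal bash sketch of that pattern, assuming nvme-cli's plain-text "reg : val" output; the helper name parse_id_output is hypothetical and not part of functions.sh:

    parse_id_output() {
        # Hypothetical condensation of the nvme_get loop traced above.
        local ref=$1 reg val; shift
        local -gA "$ref=()"                       # cf. functions.sh@20
        while IFS=: read -r reg val; do           # cf. functions.sh@21
            [[ -n $reg && -n $val ]] || continue  # skip blank lines
            reg=${reg//[[:space:]]/}              # "nsze   " -> "nsze"
            eval "${ref}[\$reg]=\${val# }"        # cf. the eval at functions.sh@23
        done < <("$@")
    }
    # Usage mirroring the trace: parse_id_output ng0n1 nvme id-ns /dev/ng0n1
    # afterwards ${ng0n1[nsze]} is 0x140000, ${ng0n1[nlbaf]} is 7, and so on.
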
00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.508 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:26.509 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
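The lbaf0-lbaf7 records just captured list the namespace's supported LBA formats; the "(in use)" marker on lbaf4 agrees with the flbas=0x4 value read earlier in this dump, and lbads is the log2 of the logical block size, so the active format's data size follows directly. Illustrative arithmetic only (the variable name is not from functions.sh):

    # flbas=0x4 selects lbaf4: "ms:0 lbads:12 rp:0 (in use)"
    lbads=12
    echo "$((1 << lbads))"   # 4096 -- matching the 4096-byte Namespace Block
                             # Size printed by nvme_simple_copy earlier in this log
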
00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:26.509 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:26.509 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.510 12:07:27 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:26.510 12:07:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:26.510 12:07:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:26.510 12:07:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.510 12:07:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.510 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:26.511 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
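At this point the same loop is filling nvme1 with controller-level id-ctrl fields (vid, oacs, acl, frmw, lpa, ...). The values land as plain strings, but the hex ones can be bit-tested directly in an arithmetic context; a hedged example of how such fields are typically consumed (the usage is illustrative, not taken from this log; bit positions are per the NVMe base specification):

  (( ${nvme1[oacs]} & 1 << 1 )) && echo "Format NVM supported"            # oacs=0x12a has bit 1 set
  (( ${nvme1[oacs]} & 1 << 3 )) && echo "Namespace management supported"  # ...and bit 3
  echo "firmware slots: $(( (${nvme1[frmw]} >> 1) & 0x7 ))"               # FRMW bits 3:1 -> 1 slot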
00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.511 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
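The thermal thresholds captured just above are reported in Kelvin per the NVMe base specification, so wctemp=343 and cctemp=373 decode to 70 C and 100 C. For example, assuming the nvme1 array populated above:

  echo "warning at $(( ${nvme1[wctemp]} - 273 ))C, critical at $(( ${nvme1[cctemp]} - 273 ))C"
  # -> warning at 70C, critical at 100C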
00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:26.512 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:26.513 12:07:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
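nvme_get is now populating ng1n1 from the generic character node /dev/ng1n1, and a couple of the namespace fields just captured decode meaningfully. A hedged illustration (field layouts per the NVMe base specification, not from this log):

  (( ${ng1n1[nsfeat]} & 1 << 4 )) && echo "optimal I/O size fields (npwg/npwa/npdg/npda/nows) are valid"
  (( (${ng1n1[dlfeat]} & 0x7) == 1 )) && echo "deallocated blocks read back as zeroes"   # dlfeat=1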
00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.513 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:26.514 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
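Three of the values just stored, mssrl=128, mcl=128 and msrc=127, are the namespace's Copy-command limits (max single source range length, max copy length, max source range count). MSRC is a 0's-based field in the NVMe base spec, so the 127 recorded here allows one more range than it reads as; a one-liner to make that explicit:

    msrc=127                                       # as parsed into ng1n1[msrc] above
    echo "source ranges per Copy: $((msrc + 1))"   # -> 128 (0's-based field)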
00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.514 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.514 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:26.514 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:26.515 12:07:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:26.515 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.515 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
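The lbafN strings being captured here are LBA format descriptors: ms is the metadata bytes carried per block, lbads the log2 of the data size, and rp a relative-performance hint. The flbas value recorded earlier for this namespace (0x7) selects format 7, which the entries that follow tag "(in use)". Decoding one descriptor by hand:

    # lbaf7 reads "ms:64 lbads:12 rp:0 (in use)" for nvme1n1:
    lbads=12 ms=64
    echo "data bytes/LBA: $((1 << lbads)), metadata bytes: $ms"   # -> 4096, 64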
00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:26.516 12:07:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:26.516 12:07:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:26.516 12:07:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.516 12:07:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
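The id-ctrl pass for nvme2 (a QEMU emulated controller; vid 0x1b36 is the Red Hat/QEMU PCI vendor ID) has just recorded ver=0x10400 and mdts=7. VER packs major/minor/tertiary version numbers into one word, and MDTS caps a single transfer at 2^MDTS minimum-size pages; the 4 KiB page below is an assumption, since CAP.MPSMIN is not shown in this log:

    ver=0x10400 mdts=7 page=4096   # page size assumed, not taken from the log
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))  # NVMe 1.4.0
    echo "max transfer: $(( (1 << mdts) * page )) bytes"    # 524288 (512 KiB)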
00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:26.517 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
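The wctemp=343 and cctemp=373 just stored are the warning and critical composite temperature thresholds, which NVMe reports in Kelvin rather than Celsius:

    for k in 343 373; do echo "$k K = $((k - 273)) C"; done   # 70 C warning, 100 C critical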
00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:26.517 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
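A few entries back the optional NVM command support mask for nvme2 came through as oncs=0x15d. It is a plain bitmask; the bit names below follow my reading of the NVMe base spec (bit 8, Copy, being the newest of them), so treat them as annotation rather than something this log asserts:

    oncs=0x15d    # from nvme2's id-ctrl output above
    names=(Compare Write-Uncorrectable DSM Write-Zeroes Save/Select Reservations Timestamp Verify Copy)
    for i in "${!names[@]}"; do
        (( oncs & (1 << i) )) && echo "bit $i: ${names[$i]}"
    done
    # -> Compare, DSM, Write-Zeroes, Save/Select, Timestamp, Copy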
00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 
12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val
00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:09:26.519 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
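The two entries at functions.sh@58 and @54 just above are the namespace bookkeeping: the parsed device is filed into _ctrl_ns under its namespace number, then the loop glob moves on to the next node. The extglob matches both the character-device names (ng2n1, ng2n2, ...) and the block-device names (nvme2n1, ...) of the controller. A standalone sketch of that enumeration, with the pattern copied from the trace and the echo added for illustration:

    # Sketch: enumerate nvme2's namespace nodes the way functions.sh@54 does.
    # "ng${ctrl##*nvme}" expands to "ng2" and "${ctrl##*/}n" to "nvme2n",
    # so the pattern matches ng2n* and nvme2n* under the controller directory.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}   # keyed by namespace number
    done
    echo "namespaces: ${_ctrl_ns[*]}"

Since ng2n1 sorts before nvme2n1, the char-device entries are visited first and the block-device pass later overwrites the same slots, which is exactly the order this trace walks them in.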
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"'
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read
-r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:26.520 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 
12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
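ng2n2 reports the same LBA format table as ng2n1: lbaf0 through lbaf3 are 512-byte formats (lbads:9) carrying 0, 8, 16 and 64 bytes of metadata, lbaf4 through lbaf7 repeat those metadata sizes at 4096 bytes (lbads:12), and flbas=0x4 marks lbaf4 as the format in use. A small sketch of how the captured strings decode into a block size; the array literal is copied from the values above, the parsing itself is illustrative:

    # Sketch: derive the active LBA data size from the recorded fields.
    declare -A ng2n2=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ng2n2[flbas] & 0xf ))         # low nibble of flbas picks the format
    lbaf=${ng2n2[lbaf$fmt]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}
    echo "lbaf$fmt in use: $((1 << lbads))-byte data blocks"   # prints 4096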
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"'
00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:09:26.521
12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
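ng2n3's identify page matches its siblings so far: nsze, ncap and nuse all read 0x100000 blocks and flbas=0x4 again points at a 4096-byte format (its lbaf table follows below). Under those values each namespace works out to 4 GiB, fully allocated and fully in use; a one-liner to confirm the arithmetic, with the numbers taken from the trace:

    nsze=0x100000 block=4096
    printf '%d blocks x %d B = %d GiB\n' $((nsze)) $block $(( nsze * block / 1024**3 ))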
00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:26.522 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.522 12:07:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:26.522 12:07:27 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.522 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.523 
12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:26.523 12:07:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.523 
12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.523 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
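The lbaf0..lbaf7 entries cached above describe the eight LBA formats this namespace advertises: ms is the per-block metadata size in bytes, lbads is log2 of the LBA data size, and rp is a relative-performance hint. flbas=0x4 (bits 3:0 select the format index) points at lbaf4, ms:0 lbads:12, i.e. 4096-byte blocks with no metadata, which is why nvme-cli tags that row "(in use)". Decoding the block size from the parsed array, using values from this trace:

    # lbads is log2(LBA data size): lbads:9 -> 512 B, lbads:12 -> 4096 B.
    lbaf=${nvme2n1[lbaf4]}                    # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
    echo "in-use block size: $((1 << lbads)) bytes"   # -> 4096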
00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:26.524 12:07:27 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:26.525 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:26.525 12:07:27 
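At functions.sh@58 each parsed namespace is filed into _ctrl_ns keyed by its namespace ID, extracted with the parameter expansion ${ns##*n}, which deletes the longest prefix ending in "n" and leaves only the trailing NSID digits. The extglob at @54, @("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, matches both the ngXnY character devices and the nvmeXnY block devices under the controller's sysfs directory. The key extraction in isolation (the declare line is added only to make the demo self-contained):

    declare -A _ctrl_ns
    ns=nvme2n2
    echo "${ns##*n}"            # -> 2 (everything through the last 'n' removed)
    _ctrl_ns[${ns##*n}]=$ns     # _ctrl_ns[2]=nvme2n2, as at functions.sh@58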
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:26.526 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.526 12:07:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:26.526 12:07:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:26.526 12:07:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:26.526 12:07:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:26.527 12:07:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.527 12:07:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- 
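With nvme2 complete, functions.sh@60-63 records the controller in ctrls, its namespace map in nvmes, its PCI address 0000:00:12.0 in bdfs, and its slot in ordered_ctrls, then the @47 loop advances to nvme3 at 0000:00:13.0. The odd-looking `[[ =~ 0000:00:13.0 ]]` from scripts/common.sh@21 is just xtrace rendering a regex test whose left-hand variable (the PCI block list) expanded to empty, so nothing is blocked and pci_can_use returns 0. A condensed skeleton of the walk, assuming the PCI address is resolved via the device symlink (that derivation is not shown in this excerpt):

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0 (assumed derivation)
        pci_can_use "$pci" || continue                    # honor PCI block/allow lists
        ctrl_dev=${ctrl##*/}
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # cache id-ctrl fields
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] && nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
        done
        ctrls[$ctrl_dev]=$ctrl_dev; bdfs[$ctrl_dev]=$pci
    done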
nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
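The identity fields cached here pin down the emulated hardware: vid 0x1b36 is the Red Hat/QEMU PCI vendor ID, ssvid 0x1af4 the Red Hat (virtio) subsystem vendor ID, and the model string "QEMU NVMe Ctrl" with firmware revision "8.0.0 " appears to track the QEMU release the VM is running. A quick cross-check of the cached value against sysfs (the vendor attribute is standard for PCI devices):

    cat /sys/class/nvme/nvme3/device/vendor   # -> 0x1b36
    echo "${nvme3[vid]}"                      # -> 0x1b36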
00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- 
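ctratt=0x88010 decodes to bits 4, 15 and 19; per the NVMe 2.x base spec those are Endurance Groups, Extended LBA Formats, and (the reason this nvme_fdp test targets this controller) Flexible Data Placement, though the bit names are worth double-checking against the spec revision in use. oaes=0x100 is bit 8, the Namespace Attribute Changed notice. Bit tests against the cached value:

    ctratt=0x88010
    (( ctratt & (1 << 4)  )) && echo "endurance groups"
    (( ctratt & (1 << 19) )) && echo "flexible data placement (FDP)"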
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 
12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:26.527 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.528 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:26.529 12:07:27 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:26.529 12:07:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:26.530 12:07:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:26.530 12:07:27 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:26.530 12:07:27 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:26.530 12:07:27 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:26.530 12:07:27 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:26.530 12:07:27 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:26.530 12:07:27 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:26.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.353 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.353 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.353 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.353 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.353 12:07:28 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:27.353 12:07:28 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:27.353 12:07:28 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.353 12:07:28 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 ************************************ 00:09:27.353 START TEST nvme_flexible_data_placement 00:09:27.353 ************************************ 00:09:27.353 12:07:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:27.611 Initializing NVMe Controllers 00:09:27.611 Attaching to 0000:00:13.0 00:09:27.611 Controller supports FDP Attached to 0000:00:13.0 00:09:27.611 Namespace ID: 1 Endurance Group ID: 1 00:09:27.611 Initialization complete. 
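The xtrace above shows nvme/functions.sh reading identify-controller output one "reg : value" pair at a time, storing each field in a per-controller associative array, and then selecting the controller whose CTRATT has bit 19 (Flexible Data Placement) set; nvme3 wins because its CTRATT of 0x88010 carries that bit while the other controllers report 0x8000. A minimal bash sketch of those two steps, with hypothetical helper names (parse_ctrl and has_fdp are illustrative, not the harness's own functions):

  # Parse "reg : value" lines into an associative array; the harness does
  # this with eval 'nvmeN[reg]="val"', a nameref avoids the eval here.
  parse_ctrl() {
    local -n _ctrl=$1                       # nameref to caller's array
    local reg val
    while IFS=: read -r reg val; do
      [[ -n $val ]] && _ctrl[${reg//[[:space:]]/}]=${val##* }
    done
  }

  # CTRATT bit 19 is the Flexible Data Placement capability flag.
  has_fdp() { (( $1 & 1 << 19 )); }

  declare -A ctrl
  parse_ctrl ctrl <<< $'vid : 0x1b36\nctratt : 0x88010'
  has_fdp "${ctrl[ctratt]}" && echo FDP-capable    # prints: FDP-capable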
00:09:27.611 00:09:27.611 ================================== 00:09:27.611 == FDP tests for Namespace: #01 == 00:09:27.611 ================================== 00:09:27.611 00:09:27.611 Get Feature: FDP: 00:09:27.611 ================= 00:09:27.611 Enabled: Yes 00:09:27.611 FDP configuration Index: 0 00:09:27.611 00:09:27.611 FDP configurations log page 00:09:27.611 =========================== 00:09:27.611 Number of FDP configurations: 1 00:09:27.611 Version: 0 00:09:27.611 Size: 112 00:09:27.611 FDP Configuration Descriptor: 0 00:09:27.611 Descriptor Size: 96 00:09:27.611 Reclaim Group Identifier format: 2 00:09:27.611 FDP Volatile Write Cache: Not Present 00:09:27.611 FDP Configuration: Valid 00:09:27.611 Vendor Specific Size: 0 00:09:27.611 Number of Reclaim Groups: 2 00:09:27.611 Number of Reclaim Unit Handles: 8 00:09:27.611 Max Placement Identifiers: 128 00:09:27.611 Number of Namespaces Supported: 256 00:09:27.611 Reclaim Unit Nominal Size: 6000000 bytes 00:09:27.611 Estimated Reclaim Unit Time Limit: Not Reported 00:09:27.611 RUH Desc #000: RUH Type: Initially Isolated 00:09:27.611 RUH Desc #001: RUH Type: Initially Isolated 00:09:27.611 RUH Desc #002: RUH Type: Initially Isolated 00:09:27.611 RUH Desc #003: RUH Type: Initially Isolated 00:09:27.611 RUH Desc #004: RUH Type: Initially Isolated 00:09:27.611 RUH Desc #005: RUH Type: Initially Isolated 00:09:27.611 RUH Desc #006: RUH Type: Initially Isolated 00:09:27.611 RUH Desc #007: RUH Type: Initially Isolated 00:09:27.611 00:09:27.611 FDP reclaim unit handle usage log page 00:09:27.611 ====================================== 00:09:27.611 Number of Reclaim Unit Handles: 8 00:09:27.611 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:27.611 RUH Usage Desc #001: RUH Attributes: Unused 00:09:27.611 RUH Usage Desc #002: RUH Attributes: Unused 00:09:27.611 RUH Usage Desc #003: RUH Attributes: Unused 00:09:27.611 RUH Usage Desc #004: RUH Attributes: Unused 00:09:27.611 RUH Usage Desc #005: RUH Attributes: Unused 00:09:27.611 RUH Usage Desc #006: RUH Attributes: Unused 00:09:27.611 RUH Usage Desc #007: RUH Attributes: Unused 00:09:27.611 00:09:27.611 FDP statistics log page 00:09:27.611 ======================= 00:09:27.611 Host bytes with metadata written: 954191872 00:09:27.611 Media bytes with metadata written: 954425344 00:09:27.611 Media bytes erased: 0 00:09:27.611 00:09:27.611 FDP Reclaim unit handle status 00:09:27.611 ============================== 00:09:27.611 Number of RUHS descriptors: 2 00:09:27.611 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003203 00:09:27.611 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:27.611 00:09:27.611 FDP write on placement id: 0 success 00:09:27.611 00:09:27.611 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:27.611 00:09:27.611 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:27.611 00:09:27.611 Get Feature: FDP Events for Placement handle: #0 00:09:27.611 ======================== 00:09:27.611 Number of FDP Events: 6 00:09:27.611 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:27.611 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:27.611 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:27.611 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:27.611 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:27.611 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:27.611 00:09:27.611 FDP events log page
00:09:27.611 =================== 00:09:27.611 Number of FDP events: 1 00:09:27.611 FDP Event #0: 00:09:27.611 Event Type: RU Not Written to Capacity 00:09:27.611 Placement Identifier: Valid 00:09:27.611 NSID: Valid 00:09:27.611 Location: Valid 00:09:27.611 Placement Identifier: 0 00:09:27.611 Event Timestamp: 6 00:09:27.611 Namespace Identifier: 1 00:09:27.611 Reclaim Group Identifier: 0 00:09:27.611 Reclaim Unit Handle Identifier: 0 00:09:27.611 00:09:27.611 FDP test passed 00:09:27.611 00:09:27.611 real 0m0.242s 00:09:27.611 user 0m0.079s 00:09:27.611 sys 0m0.061s 00:09:27.611 12:07:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.611 12:07:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:27.611 ************************************ 00:09:27.611 END TEST nvme_flexible_data_placement 00:09:27.611 ************************************ 00:09:27.611 00:09:27.611 real 0m7.492s 00:09:27.611 user 0m1.061s 00:09:27.611 sys 0m1.371s 00:09:27.611 12:07:28 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.611 12:07:28 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:27.611 ************************************ 00:09:27.611 END TEST nvme_fdp 00:09:27.611 ************************************ 00:09:27.611 12:07:28 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:27.611 12:07:28 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:27.611 12:07:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.611 12:07:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.611 12:07:28 -- common/autotest_common.sh@10 -- # set +x 00:09:27.611 ************************************ 00:09:27.611 START TEST nvme_rpc 00:09:27.611 ************************************ 00:09:27.611 12:07:28 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:27.871 * Looking for test storage... 
00:09:27.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:27.871 12:07:28 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.871 12:07:28 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.871 12:07:28 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.871 12:07:28 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.871 12:07:28 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:27.871 12:07:28 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.871 12:07:28 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.871 --rc genhtml_branch_coverage=1 00:09:27.871 --rc genhtml_function_coverage=1 00:09:27.871 --rc genhtml_legend=1 00:09:27.871 --rc geninfo_all_blocks=1 00:09:27.871 --rc geninfo_unexecuted_blocks=1 00:09:27.871 00:09:27.871 ' 00:09:27.871 12:07:28 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.871 --rc genhtml_branch_coverage=1 00:09:27.871 --rc genhtml_function_coverage=1 00:09:27.871 --rc genhtml_legend=1 00:09:27.871 --rc geninfo_all_blocks=1 00:09:27.871 --rc geninfo_unexecuted_blocks=1 00:09:27.871 00:09:27.871 ' 00:09:27.871 12:07:28 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:27.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.871 --rc genhtml_branch_coverage=1 00:09:27.871 --rc genhtml_function_coverage=1 00:09:27.871 --rc genhtml_legend=1 00:09:27.872 --rc geninfo_all_blocks=1 00:09:27.872 --rc geninfo_unexecuted_blocks=1 00:09:27.872 00:09:27.872 ' 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.872 --rc genhtml_branch_coverage=1 00:09:27.872 --rc genhtml_function_coverage=1 00:09:27.872 --rc genhtml_legend=1 00:09:27.872 --rc geninfo_all_blocks=1 00:09:27.872 --rc geninfo_unexecuted_blocks=1 00:09:27.872 00:09:27.872 ' 00:09:27.872 12:07:28 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.872 12:07:28 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:27.872 12:07:28 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:27.872 12:07:28 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65961 00:09:27.872 12:07:28 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:27.872 12:07:28 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:27.872 12:07:28 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65961 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65961 ']' 00:09:27.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.872 12:07:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.142 [2024-11-25 12:07:28.979332] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:09:28.142 [2024-11-25 12:07:28.979828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65961 ] 00:09:28.142 [2024-11-25 12:07:29.140934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:28.400 [2024-11-25 12:07:29.242613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.400 [2024-11-25 12:07:29.242784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.966 12:07:29 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.966 12:07:29 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:28.966 12:07:29 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:29.224 Nvme0n1 00:09:29.224 12:07:30 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:29.224 12:07:30 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:29.224 request: 00:09:29.224 { 00:09:29.224 "bdev_name": "Nvme0n1", 00:09:29.224 "filename": "non_existing_file", 00:09:29.224 "method": "bdev_nvme_apply_firmware", 00:09:29.224 "req_id": 1 00:09:29.224 } 00:09:29.224 Got JSON-RPC error response 00:09:29.224 response: 00:09:29.224 { 00:09:29.224 "code": -32603, 00:09:29.224 "message": "open file failed." 00:09:29.224 } 00:09:29.483 12:07:30 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:29.483 12:07:30 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:29.483 12:07:30 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:29.483 12:07:30 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:29.483 12:07:30 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65961 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65961 ']' 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65961 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65961 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.483 killing process with pid 65961 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65961' 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65961 00:09:29.483 12:07:30 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65961 00:09:31.389 00:09:31.389 real 0m3.285s 00:09:31.389 user 0m6.212s 00:09:31.389 sys 0m0.528s 00:09:31.389 12:07:31 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.389 12:07:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.389 ************************************ 00:09:31.389 END TEST nvme_rpc 00:09:31.389 ************************************ 00:09:31.389 12:07:32 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:31.389 12:07:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:31.389 12:07:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.389 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:09:31.389 ************************************ 00:09:31.389 START TEST nvme_rpc_timeouts 00:09:31.389 ************************************ 00:09:31.389 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:31.389 * Looking for test storage... 00:09:31.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.389 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.389 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.389 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.389 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.389 12:07:32 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:31.389 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.389 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.389 --rc genhtml_branch_coverage=1 00:09:31.389 --rc genhtml_function_coverage=1 00:09:31.389 --rc genhtml_legend=1 00:09:31.389 --rc geninfo_all_blocks=1 00:09:31.389 --rc geninfo_unexecuted_blocks=1 00:09:31.389 00:09:31.389 ' 00:09:31.389 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.390 --rc genhtml_branch_coverage=1 00:09:31.390 --rc genhtml_function_coverage=1 00:09:31.390 --rc genhtml_legend=1 00:09:31.390 --rc geninfo_all_blocks=1 00:09:31.390 --rc geninfo_unexecuted_blocks=1 00:09:31.390 00:09:31.390 ' 00:09:31.390 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.390 --rc genhtml_branch_coverage=1 00:09:31.390 --rc genhtml_function_coverage=1 00:09:31.390 --rc genhtml_legend=1 00:09:31.390 --rc geninfo_all_blocks=1 00:09:31.390 --rc geninfo_unexecuted_blocks=1 00:09:31.390 00:09:31.390 ' 00:09:31.390 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.390 --rc genhtml_branch_coverage=1 00:09:31.390 --rc genhtml_function_coverage=1 00:09:31.390 --rc genhtml_legend=1 00:09:31.390 --rc geninfo_all_blocks=1 00:09:31.390 --rc geninfo_unexecuted_blocks=1 00:09:31.390 00:09:31.390 ' 00:09:31.390 12:07:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.390 12:07:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66026 00:09:31.390 12:07:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66026 00:09:31.390 12:07:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66058 00:09:31.390 12:07:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:09:31.390 12:07:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66058 00:09:31.390 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66058 ']' 00:09:31.390 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.390 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.390 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.390 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.390 12:07:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:31.390 12:07:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:31.390 [2024-11-25 12:07:32.221523] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:09:31.390 [2024-11-25 12:07:32.221647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66058 ] 00:09:31.390 [2024-11-25 12:07:32.381264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.646 [2024-11-25 12:07:32.480378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.646 [2024-11-25 12:07:32.480571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.209 12:07:33 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.209 Checking default timeout settings: 00:09:32.209 12:07:33 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:32.209 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:32.209 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:32.465 Making settings changes with rpc: 00:09:32.465 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:32.465 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:32.721 Check default vs. modified settings: 00:09:32.721 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:32.721 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66026 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66026 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:32.978 Setting action_on_timeout is changed as expected. 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66026 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66026 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:32.978 Setting timeout_us is changed as expected. 
00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66026 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66026 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:32.978 Setting timeout_admin_us is changed as expected. 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66026 /tmp/settings_modified_66026 00:09:32.978 12:07:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66058 00:09:32.978 12:07:33 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66058 ']' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66058 00:09:32.978 12:07:33 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:32.978 12:07:33 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.978 12:07:33 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66058 00:09:32.978 killing process with pid 66058 00:09:32.978 12:07:34 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.978 12:07:34 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.978 12:07:34 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66058' 00:09:32.978 12:07:34 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66058 00:09:32.978 12:07:34 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66058 00:09:34.869 RPC TIMEOUT SETTING TEST PASSED. 00:09:34.869 12:07:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
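The default-vs-modified comparison that just ran reduces to this loop, a condensed sketch of nvme_rpc_timeouts.sh@38-47 (the error branch is paraphrased; the tmpfile paths reuse the names set earlier in this run):

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        # pull each setting's value out of the two saved configs
        setting_before=$(grep "$setting" /tmp/settings_default_66026 \
            | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" /tmp/settings_modified_66026 \
            | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            echo "ERROR: setting $setting has not been modified" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done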
00:09:34.869 00:09:34.869 real 0m3.494s 00:09:34.869 user 0m6.813s 00:09:34.869 sys 0m0.496s 00:09:34.869 12:07:35 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.869 12:07:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:34.869 ************************************ 00:09:34.869 END TEST nvme_rpc_timeouts 00:09:34.869 ************************************ 00:09:34.869 12:07:35 -- spdk/autotest.sh@239 -- # uname -s 00:09:34.869 12:07:35 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:34.869 12:07:35 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.869 12:07:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.869 12:07:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.869 12:07:35 -- common/autotest_common.sh@10 -- # set +x 00:09:34.869 ************************************ 00:09:34.869 START TEST sw_hotplug 00:09:34.869 ************************************ 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.869 * Looking for test storage... 00:09:34.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.869 12:07:35 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.869 --rc genhtml_branch_coverage=1 00:09:34.869 --rc genhtml_function_coverage=1 00:09:34.869 --rc genhtml_legend=1 00:09:34.869 --rc geninfo_all_blocks=1 00:09:34.869 --rc geninfo_unexecuted_blocks=1 00:09:34.869 00:09:34.869 ' 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.869 --rc genhtml_branch_coverage=1 00:09:34.869 --rc genhtml_function_coverage=1 00:09:34.869 --rc genhtml_legend=1 00:09:34.869 --rc geninfo_all_blocks=1 00:09:34.869 --rc geninfo_unexecuted_blocks=1 00:09:34.869 00:09:34.869 ' 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.869 --rc genhtml_branch_coverage=1 00:09:34.869 --rc genhtml_function_coverage=1 00:09:34.869 --rc genhtml_legend=1 00:09:34.869 --rc geninfo_all_blocks=1 00:09:34.869 --rc geninfo_unexecuted_blocks=1 00:09:34.869 00:09:34.869 ' 00:09:34.869 12:07:35 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.869 --rc genhtml_branch_coverage=1 00:09:34.869 --rc genhtml_function_coverage=1 00:09:34.869 --rc genhtml_legend=1 00:09:34.869 --rc geninfo_all_blocks=1 00:09:34.869 --rc geninfo_unexecuted_blocks=1 00:09:34.869 00:09:34.869 ' 00:09:34.869 12:07:35 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:35.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.127 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.127 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.127 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.127 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.127 12:07:36 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:35.127 12:07:36 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:35.127 12:07:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:35.128 12:07:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.128 12:07:36 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:35.128 12:07:36 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.128 12:07:36 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:35.128 12:07:36 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:35.128 12:07:36 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:35.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.642 Waiting for block devices as requested 00:09:35.642 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.899 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.899 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.157 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:41.157 12:07:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:41.157 12:07:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.157 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:41.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.415 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:41.673 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:41.673 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.673 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:41.931 12:07:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66914 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:41.931 12:07:42 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:41.931 12:07:42 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:41.931 12:07:42 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:41.931 12:07:42 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:41.931 12:07:42 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:41.931 12:07:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:42.189 Initializing NVMe Controllers 00:09:42.189 Attaching to 0000:00:10.0 00:09:42.189 Attaching to 0000:00:11.0 00:09:42.189 Attached to 0000:00:10.0 00:09:42.189 Attached to 0000:00:11.0 00:09:42.189 Initialization complete. Starting I/O... 
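The nvme_in_userspace expansion traced above at scripts/common.sh@233-329 reduces to the following filter: list PCI functions with class 01 / subclass 08 / prog-if 02 (NVMe) and keep the ones still bound to the kernel nvme driver. A sketch (the pci_can_use allow/deny-list check and the FreeBSD branch are elided; the function name is illustrative):

    nvme_in_userspace_sketch() {
        local bdf
        for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
                | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
            # keep only devices the kernel nvme driver currently claims
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
        done
    }
    nvme_in_userspace_sketch   # on this VM: 0000:00:10.0 through 0000:00:13.0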
00:09:42.189 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:42.189 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:42.189 00:09:43.123 QEMU NVMe Ctrl (12340 ): 2546 I/Os completed (+2546) 00:09:43.123 QEMU NVMe Ctrl (12341 ): 2667 I/Os completed (+2667) 00:09:43.123 00:09:44.057 QEMU NVMe Ctrl (12340 ): 5757 I/Os completed (+3211) 00:09:44.057 QEMU NVMe Ctrl (12341 ): 6340 I/Os completed (+3673) 00:09:44.057 00:09:44.989 QEMU NVMe Ctrl (12340 ): 9100 I/Os completed (+3343) 00:09:44.989 QEMU NVMe Ctrl (12341 ): 9954 I/Os completed (+3614) 00:09:44.989 00:09:46.364 QEMU NVMe Ctrl (12340 ): 12202 I/Os completed (+3102) 00:09:46.364 QEMU NVMe Ctrl (12341 ): 13314 I/Os completed (+3360) 00:09:46.364 00:09:47.298 QEMU NVMe Ctrl (12340 ): 15241 I/Os completed (+3039) 00:09:47.298 QEMU NVMe Ctrl (12341 ): 16526 I/Os completed (+3212) 00:09:47.298 00:09:47.863 12:07:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:47.863 12:07:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.863 12:07:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.863 [2024-11-25 12:07:48.853517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:47.863 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:47.863 [2024-11-25 12:07:48.854907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.855028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.855081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.855122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:47.863 [2024-11-25 12:07:48.856779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.856897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.856913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.856925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 12:07:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.863 12:07:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.863 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/subsystem_device 00:09:47.863 EAL: Scan for (pci) bus failed. 00:09:47.863 [2024-11-25 12:07:48.886146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:47.863 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:47.863 [2024-11-25 12:07:48.887083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.887123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.887144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.887158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:47.863 [2024-11-25 12:07:48.888634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.888671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.888688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 [2024-11-25 12:07:48.888698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.863 12:07:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:47.863 12:07:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:48.121 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:48.121 Attaching to 0000:00:10.0 00:09:48.121 Attached to 0000:00:10.0 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.121 12:07:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:48.121 Attaching to 0000:00:11.0 00:09:48.121 Attached to 0000:00:11.0 00:09:49.054 QEMU NVMe Ctrl (12340 ): 3210 I/Os completed (+3210) 00:09:49.054 QEMU NVMe Ctrl (12341 ): 3045 I/Os completed (+3045) 00:09:49.054 00:09:49.984 QEMU NVMe Ctrl (12340 ): 6716 I/Os completed (+3506) 00:09:49.984 QEMU NVMe Ctrl (12341 ): 6924 I/Os completed (+3879) 00:09:49.984 00:09:51.356 QEMU NVMe Ctrl (12340 ): 10281 I/Os completed (+3565) 00:09:51.356 QEMU NVMe Ctrl (12341 ): 10579 I/Os completed (+3655) 00:09:51.356 00:09:52.291 QEMU NVMe Ctrl (12340 ): 13627 I/Os completed (+3346) 00:09:52.291 QEMU NVMe Ctrl (12341 ): 14214 I/Os completed (+3635) 00:09:52.291 00:09:53.230 QEMU NVMe Ctrl (12340 ): 16726 I/Os completed (+3099) 00:09:53.230 QEMU NVMe Ctrl (12341 ): 17423 I/Os completed (+3209) 00:09:53.230 00:09:54.174 QEMU NVMe Ctrl (12340 ): 19795 I/Os completed (+3069) 00:09:54.174 QEMU NVMe Ctrl (12341 ): 20557 I/Os completed (+3134) 00:09:54.174 00:09:55.111 QEMU NVMe Ctrl (12340 ): 22664 I/Os completed (+2869) 00:09:55.111 QEMU NVMe Ctrl (12341 ): 23454 I/Os completed (+2897) 00:09:55.111 00:09:56.051 QEMU NVMe Ctrl (12340 ): 25534 I/Os completed (+2870) 00:09:56.051 QEMU NVMe Ctrl (12341 ): 26393 I/Os completed (+2939) 
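The hotplug example app just observed both controllers vanish and come back. On the shell side, the trace shows only the values passed to echo at sw_hotplug.sh@40, @56 and @59-62, not the sysfs files they land in, so the cycle below is a plausible mapping onto the standard Linux PCI hotplug knobs rather than the script's literal paths:

    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # surprise-remove
    done
    echo 1 > /sys/bus/pci/rescan                               # re-enumerate the bus
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe               # bind via the override
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override
    done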
00:09:56.051 00:09:57.052 QEMU NVMe Ctrl (12340 ): 28646 I/Os completed (+3112) 00:09:57.052 QEMU NVMe Ctrl (12341 ): 29561 I/Os completed (+3168) 00:09:57.052 00:09:57.991 QEMU NVMe Ctrl (12340 ): 31596 I/Os completed (+2950) 00:09:57.991 QEMU NVMe Ctrl (12341 ): 32545 I/Os completed (+2984) 00:09:57.991 00:09:59.377 QEMU NVMe Ctrl (12340 ): 34530 I/Os completed (+2934) 00:09:59.377 QEMU NVMe Ctrl (12341 ): 35596 I/Os completed (+3051) 00:09:59.377 00:10:00.315 QEMU NVMe Ctrl (12340 ): 37498 I/Os completed (+2968) 00:10:00.315 QEMU NVMe Ctrl (12341 ): 38678 I/Os completed (+3082) 00:10:00.315 00:10:00.315 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:00.315 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:00.315 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.315 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.315 [2024-11-25 12:08:01.179862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:00.315 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:00.315 [2024-11-25 12:08:01.181323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.181370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.181388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.181405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:00.315 [2024-11-25 12:08:01.183490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.183602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.183666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.183697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.315 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.315 [2024-11-25 12:08:01.208186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:00.315 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:00.315 [2024-11-25 12:08:01.209357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.209434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.209473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.209490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:00.315 [2024-11-25 12:08:01.211173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.315 [2024-11-25 12:08:01.211223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.316 [2024-11-25 12:08:01.211247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.316 [2024-11-25 12:08:01.211265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.316 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:00.316 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:00.316 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.316 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.316 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:00.575 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:00.575 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.575 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.575 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.575 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:00.575 Attaching to 0000:00:10.0 00:10:00.575 Attached to 0000:00:10.0 00:10:00.575 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:00.575 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.575 12:08:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:00.575 Attaching to 0000:00:11.0 00:10:00.575 Attached to 0000:00:11.0 00:10:01.141 QEMU NVMe Ctrl (12340 ): 1793 I/Os completed (+1793) 00:10:01.141 QEMU NVMe Ctrl (12341 ): 1671 I/Os completed (+1671) 00:10:01.141 00:10:02.078 QEMU NVMe Ctrl (12340 ): 5292 I/Os completed (+3499) 00:10:02.078 QEMU NVMe Ctrl (12341 ): 5243 I/Os completed (+3572) 00:10:02.078 00:10:03.012 QEMU NVMe Ctrl (12340 ): 8480 I/Os completed (+3188) 00:10:03.012 QEMU NVMe Ctrl (12341 ): 8551 I/Os completed (+3308) 00:10:03.012 00:10:04.389 QEMU NVMe Ctrl (12340 ): 11273 I/Os completed (+2793) 00:10:04.389 QEMU NVMe Ctrl (12341 ): 11378 I/Os completed (+2827) 00:10:04.389 00:10:05.322 QEMU NVMe Ctrl (12340 ): 14339 I/Os completed (+3066) 00:10:05.322 QEMU NVMe Ctrl (12341 ): 14639 I/Os completed (+3261) 00:10:05.322 00:10:06.259 QEMU NVMe Ctrl (12340 ): 17346 I/Os completed (+3007) 00:10:06.259 QEMU NVMe Ctrl (12341 ): 17661 I/Os completed (+3022) 00:10:06.259 00:10:07.199 QEMU NVMe Ctrl (12340 ): 20466 I/Os completed (+3120) 00:10:07.199 QEMU NVMe Ctrl (12341 ): 20969 I/Os completed (+3308) 00:10:07.199 00:10:08.159 QEMU NVMe Ctrl (12340 ): 23203 I/Os completed (+2737) 00:10:08.159 QEMU NVMe Ctrl (12341 ): 23807 I/Os completed (+2838) 00:10:08.159 
00:10:09.142 QEMU NVMe Ctrl (12340 ): 26570 I/Os completed (+3367) 00:10:09.142 QEMU NVMe Ctrl (12341 ): 27500 I/Os completed (+3693) 00:10:09.142 00:10:10.075 QEMU NVMe Ctrl (12340 ): 29738 I/Os completed (+3168) 00:10:10.075 QEMU NVMe Ctrl (12341 ): 30916 I/Os completed (+3416) 00:10:10.075 00:10:11.007 QEMU NVMe Ctrl (12340 ): 33201 I/Os completed (+3463) 00:10:11.007 QEMU NVMe Ctrl (12341 ): 34397 I/Os completed (+3481) 00:10:11.007 00:10:12.401 QEMU NVMe Ctrl (12340 ): 36449 I/Os completed (+3248) 00:10:12.401 QEMU NVMe Ctrl (12341 ): 37744 I/Os completed (+3347) 00:10:12.401 00:10:12.690 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:12.690 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.690 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.690 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.690 [2024-11-25 12:08:13.552933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:12.690 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:12.690 [2024-11-25 12:08:13.554274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.554402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.554438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.554512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:12.690 [2024-11-25 12:08:13.556579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.556648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.556677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.556705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.690 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.690 [2024-11-25 12:08:13.582183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:12.690 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:12.690 [2024-11-25 12:08:13.583639] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.583783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.583828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.583908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:12.690 [2024-11-25 12:08:13.585774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.585872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.585957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.690 [2024-11-25 12:08:13.585989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:12.691 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:12.691 EAL: Scan for (pci) bus failed. 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.691 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:12.691 Attaching to 0000:00:10.0 00:10:12.691 Attached to 0000:00:10.0 00:10:12.948 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.948 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.948 12:08:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:12.948 Attaching to 0000:00:11.0 00:10:12.948 Attached to 0000:00:11.0 00:10:12.948 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:12.948 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:12.948 [2024-11-25 12:08:13.848557] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:25.139 12:08:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:25.139 12:08:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.139 12:08:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.99 00:10:25.139 12:08:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.99 00:10:25.139 12:08:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:25.139 12:08:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.99 00:10:25.139 12:08:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.99 2 00:10:25.139 remove_attach_helper took 42.99s to complete (handling 2 nvme drive(s)) 12:08:25 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:31.752 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66914 00:10:31.752 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66914) - No such process 00:10:31.752 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66914 00:10:31.752 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:31.752 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:31.752 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:31.752 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67464 00:10:31.752 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:31.752 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67464 00:10:31.752 12:08:31 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67464 ']' 00:10:31.752 12:08:31 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.752 12:08:31 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.752 12:08:31 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.752 12:08:31 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.752 12:08:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.753 12:08:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:31.753 [2024-11-25 12:08:31.927477] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
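The waitforlisten 67464 call above blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. A sketch of that helper under stated assumptions (the real body lives in autotest_common.sh; polling rpc_get_methods, a cheap RPC that succeeds once the app is listening, is an assumed mechanism consistent with the max_retries/rpc_addr locals traced above):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }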
00:10:31.753 [2024-11-25 12:08:31.927591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67464 ] 00:10:31.753 [2024-11-25 12:08:32.084418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.753 [2024-11-25 12:08:32.171190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.753 12:08:32 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.753 12:08:32 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:31.753 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:31.753 12:08:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.753 12:08:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.753 12:08:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.753 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:31.753 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:31.753 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:32.010 12:08:32 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:32.010 12:08:32 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:32.010 12:08:32 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:32.010 12:08:32 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:32.010 12:08:32 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:32.010 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:32.010 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:32.010 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:32.010 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:32.010 12:08:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.560 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.561 12:08:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.561 12:08:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.561 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.561 12:08:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.561 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:38.561 12:08:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:38.561 [2024-11-25 12:08:38.922970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:38.561 [2024-11-25 12:08:38.924344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.561 [2024-11-25 12:08:38.924382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.561 [2024-11-25 12:08:38.924396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.561 [2024-11-25 12:08:38.924415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.561 [2024-11-25 12:08:38.924422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.561 [2024-11-25 12:08:38.924431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.561 [2024-11-25 12:08:38.924438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.561 [2024-11-25 12:08:38.924447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.561 [2024-11-25 12:08:38.924454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.561 [2024-11-25 12:08:38.924465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.561 [2024-11-25 12:08:38.924471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.561 [2024-11-25 12:08:38.924479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.561 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:38.561 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.561 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.561 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.561 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.561 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.561 12:08:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.561 12:08:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.561 [2024-11-25 12:08:39.422961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
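This second pass is the bdev-based variant: with the nvme module's hotplug monitor enabled over RPC, a surprise-removed controller simply drops out of bdev_get_bdevs. A minimal sequence mirroring the trace (the sysfs remove path shown is the standard kernel knob, assumed here; rpc_cmd in the trace is a thin wrapper over scripts/rpc.py):

    scripts/rpc.py bdev_nvme_set_hotplug -e             # sw_hotplug.sh@115
    scripts/rpc.py bdev_get_bdevs                       # both BDFs listed
    echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove   # yank one controller
    # once the monitor notices, bdev_get_bdevs stops listing 0000:00:10.0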
00:10:38.561 [2024-11-25 12:08:39.424322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.561 [2024-11-25 12:08:39.424355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.561 [2024-11-25 12:08:39.424368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.561 [2024-11-25 12:08:39.424384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.561 [2024-11-25 12:08:39.424393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.561 [2024-11-25 12:08:39.424400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.561 [2024-11-25 12:08:39.424410] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.561 [2024-11-25 12:08:39.424417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.561 [2024-11-25 12:08:39.424425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.561 [2024-11-25 12:08:39.424433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.561 [2024-11-25 12:08:39.424441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.561 [2024-11-25 12:08:39.424448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.561 12:08:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.561 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:38.561 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:39.130 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:39.130 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:39.130 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:39.130 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:39.130 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:39.130 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:39.130 12:08:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.130 12:08:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.130 12:08:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.130 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:39.130 12:08:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:39.130 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.130 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.130 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:39.130 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:39.130 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.130 12:08:40 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.130 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.130 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:39.392 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:39.392 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.392 12:08:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.585 12:08:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.585 12:08:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.585 12:08:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.585 12:08:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.585 12:08:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.585 [2024-11-25 12:08:52.323185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
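The bdev_bdfs helper and wait loop being traced here, condensed: map the attached NVMe bdevs to their PCI addresses and spin, as the "Still waiting" lines above show, until a given BDF disappears from the list.

    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    while bdev_bdfs | grep -q 0000:00:10.0; do
        printf 'Still waiting for %s to be gone\n' 0000:00:10.0
        sleep 0.5
    done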
00:10:51.585 [2024-11-25 12:08:52.324676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.585 [2024-11-25 12:08:52.324715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.585 [2024-11-25 12:08:52.324727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.585 [2024-11-25 12:08:52.324745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.585 [2024-11-25 12:08:52.324753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.585 [2024-11-25 12:08:52.324761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.585 [2024-11-25 12:08:52.324769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.585 [2024-11-25 12:08:52.324777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.585 [2024-11-25 12:08:52.324784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.585 [2024-11-25 12:08:52.324793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.585 [2024-11-25 12:08:52.324800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.585 [2024-11-25 12:08:52.324809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.585 12:08:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:51.585 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.843 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:51.843 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.843 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.843 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.843 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.843 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.843 12:08:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.843 12:08:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.843 12:08:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.843 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:51.843 12:08:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:52.102 [2024-11-25 12:08:52.923197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:52.102 [2024-11-25 12:08:52.924610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.102 [2024-11-25 12:08:52.924645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.102 [2024-11-25 12:08:52.924659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.102 [2024-11-25 12:08:52.924676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.102 [2024-11-25 12:08:52.924686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.102 [2024-11-25 12:08:52.924693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.102 [2024-11-25 12:08:52.924703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.102 [2024-11-25 12:08:52.924710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.102 [2024-11-25 12:08:52.924718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.102 [2024-11-25 12:08:52.924725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.102 [2024-11-25 12:08:52.924734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.102 [2024-11-25 12:08:52.924740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.359 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:52.359 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.359 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.360 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.360 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.360 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.360 12:08:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.360 12:08:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.360 12:08:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.360 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:52.360 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.617 12:08:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.840 12:09:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.840 12:09:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.840 12:09:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.840 [2024-11-25 12:09:05.723397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:04.840 [2024-11-25 12:09:05.725037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.840 [2024-11-25 12:09:05.725150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.840 [2024-11-25 12:09:05.725216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.840 [2024-11-25 12:09:05.725317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.840 [2024-11-25 12:09:05.725338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.840 [2024-11-25 12:09:05.725367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.840 [2024-11-25 12:09:05.725427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.840 [2024-11-25 12:09:05.725448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.840 [2024-11-25 12:09:05.725473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.840 [2024-11-25 12:09:05.725530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.840 [2024-11-25 12:09:05.725550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.840 [2024-11-25 12:09:05.725576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.840 12:09:05 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.840 12:09:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.840 12:09:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.840 12:09:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:04.840 12:09:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:05.097 [2024-11-25 12:09:06.123400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:05.097 [2024-11-25 12:09:06.124861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.097 [2024-11-25 12:09:06.124986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.097 [2024-11-25 12:09:06.125075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.097 [2024-11-25 12:09:06.125110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.097 [2024-11-25 12:09:06.125130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.097 [2024-11-25 12:09:06.125155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.097 [2024-11-25 12:09:06.125183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.097 [2024-11-25 12:09:06.125201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.097 [2024-11-25 12:09:06.125260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.097 [2024-11-25 12:09:06.125286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.097 [2024-11-25 12:09:06.125338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.097 [2024-11-25 12:09:06.125365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.355 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:05.355 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:11:05.356 12:09:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.356 12:09:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.356 12:09:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.356 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:05.613 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:05.613 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.613 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.613 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.613 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:05.613 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:05.613 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.613 12:09:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.75 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.75 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.75 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.75 2 00:11:17.806 remove_attach_helper took 45.75s to complete (handling 2 nvme drive(s)) 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:17.806 12:09:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:17.806 12:09:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.447 12:09:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.447 12:09:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.447 12:09:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:24.447 12:09:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.447 [2024-11-25 12:09:24.707691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
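The 'took 45.75s' line a little further up comes from a small timing wrapper: the helper runs under bash's time keyword with TIMEFORMAT=%2R (real seconds, two decimals), the elapsed value is captured, and the wrapped command's exit status is preserved. A minimal sketch of that pattern, assuming the names visible at @709-@722 and @19-@22 (the real autotest_common.sh juggles file descriptors so the command's output still reaches the log; that is elided here):

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        time=$( { time "$@" >/dev/null 2>&1; } 2>&1 ) || cmd_es=$?  # sketch: real code keeps the output
        echo "$time"                                                # e.g. 45.75
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper "$hotplug_events" "$hotplug_wait" "$use_bdev")
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"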
00:11:24.447 [2024-11-25 12:09:24.708759] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.447 [2024-11-25 12:09:24.708999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.447 [2024-11-25 12:09:24.709018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.447 [2024-11-25 12:09:24.709037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.447 [2024-11-25 12:09:24.709045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.447 [2024-11-25 12:09:24.709053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.447 [2024-11-25 12:09:24.709061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.447 [2024-11-25 12:09:24.709069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.447 [2024-11-25 12:09:24.709076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.447 [2024-11-25 12:09:24.709084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.447 [2024-11-25 12:09:24.709091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.447 [2024-11-25 12:09:24.709101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.447 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:24.447 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.447 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.447 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.447 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.447 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.447 12:09:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.447 12:09:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.447 [2024-11-25 12:09:25.207686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
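Before this second debug pass started, the test flipped the target's own hotplug machinery off and on again (@119/@120 in the trace above). Those two rpc_cmd calls are the bdev_nvme_set_hotplug RPC; invoked directly it would look like this (sketch; the poll-rate option is left at its default):

    scripts/rpc.py bdev_nvme_set_hotplug -d     # @119: disable the hotplug monitor
    scripts/rpc.py bdev_nvme_set_hotplug -e     # @120: re-enable it

With the monitor re-enabled, the pass below repeats the same remove/attach cycle as before.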
00:11:24.447 [2024-11-25 12:09:25.208693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.447 [2024-11-25 12:09:25.208722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.447 [2024-11-25 12:09:25.208733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.447 [2024-11-25 12:09:25.208748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.447 [2024-11-25 12:09:25.208756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.447 [2024-11-25 12:09:25.208763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.447 [2024-11-25 12:09:25.208772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.447 [2024-11-25 12:09:25.208779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.447 [2024-11-25 12:09:25.208787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.447 [2024-11-25 12:09:25.208794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.447 [2024-11-25 12:09:25.208802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.447 [2024-11-25 12:09:25.208808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.447 12:09:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.447 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:24.447 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.705 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:24.705 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.705 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.705 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.705 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.705 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.705 12:09:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.705 12:09:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.705 12:09:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.705 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.705 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.962 12:09:25 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.962 12:09:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:37.154 12:09:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:37.154 12:09:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:37.154 12:09:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:37.154 12:09:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.154 12:09:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.154 12:09:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.154 12:09:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.154 12:09:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.154 12:09:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.154 12:09:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.154 12:09:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.154 12:09:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:37.154 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:37.154 [2024-11-25 12:09:38.107937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
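Seen end to end, one hotplug event in this loop is: detach every controller, poll until the target has dropped them all, then re-register the devices and give the target twice the hotplug wait to pick them back up. A sketch of the loop under the names from the trace; the sysfs paths are assumptions, since the xtrace shows only bare echo commands at @40, @56 and @59-@62:

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev bdfs
        sleep "$hotplug_wait"                                     # @36: initial settle (sleep 6 in this run)
        while ((hotplug_events--)); do                            # @38
            for dev in "${nvmes[@]}"; do                          # @39
                echo 1 > "/sys/bus/pci/devices/$dev/remove"       # @40 (assumed sysfs target)
            done
            while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do  # @50: poll, as sketched earlier
                sleep 0.5
            done
            echo 1 > /sys/bus/pci/rescan                          # @56 (assumed: bring the devices back)
            for dev in "${nvmes[@]}"; do                          # @58
                echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (assumed)
                echo "$dev" > /sys/bus/pci/drivers_probe                            # @60/@61 (assumed)
                echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62 (assumed)
            done
            sleep $((hotplug_wait * 2))                           # @66: the sleep 12 entries, with hotplug_wait=6
        done
    }

use_bdev selects the bdev-based verification seen at @43/@68/@70; this sketch assumes it is true, which is the only path exercised in this run.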
00:11:37.154 [2024-11-25 12:09:38.109054] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.154 [2024-11-25 12:09:38.109087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.154 [2024-11-25 12:09:38.109098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.154 [2024-11-25 12:09:38.109117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.154 [2024-11-25 12:09:38.109125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.154 [2024-11-25 12:09:38.109134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.154 [2024-11-25 12:09:38.109142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.154 [2024-11-25 12:09:38.109151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.154 [2024-11-25 12:09:38.109158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.154 [2024-11-25 12:09:38.109167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.154 [2024-11-25 12:09:38.109174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.154 [2024-11-25 12:09:38.109182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.761 [2024-11-25 12:09:38.507956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:37.761 [2024-11-25 12:09:38.509019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.761 [2024-11-25 12:09:38.509047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.761 [2024-11-25 12:09:38.509060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.761 [2024-11-25 12:09:38.509075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.761 [2024-11-25 12:09:38.509086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.761 [2024-11-25 12:09:38.509094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.761 [2024-11-25 12:09:38.509102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.761 [2024-11-25 12:09:38.509109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.761 [2024-11-25 12:09:38.509118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.761 [2024-11-25 12:09:38.509126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.761 [2024-11-25 12:09:38.509134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.761 [2024-11-25 12:09:38.509141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.761 12:09:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.761 12:09:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.761 12:09:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.761 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:38.019 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:38.019 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.019 12:09:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.210 12:09:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.210 12:09:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.210 12:09:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.210 12:09:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.210 12:09:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.210 12:09:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.210 12:09:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.210 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:50.210 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:50.210 [2024-11-25 12:09:51.008161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:50.210 [2024-11-25 12:09:51.010882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.210 [2024-11-25 12:09:51.010921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.210 [2024-11-25 12:09:51.010933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.210 [2024-11-25 12:09:51.010967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.210 [2024-11-25 12:09:51.010976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.210 [2024-11-25 12:09:51.010985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.210 [2024-11-25 12:09:51.010992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.210 [2024-11-25 12:09:51.011003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.210 [2024-11-25 12:09:51.011010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.210 [2024-11-25 12:09:51.011018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.210 [2024-11-25 12:09:51.011024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.210 [2024-11-25 12:09:51.011032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.469 [2024-11-25 12:09:51.408171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:50.469 [2024-11-25 12:09:51.409254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.469 [2024-11-25 12:09:51.409279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.469 [2024-11-25 12:09:51.409291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.469 [2024-11-25 12:09:51.409306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.469 [2024-11-25 12:09:51.409316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.469 [2024-11-25 12:09:51.409323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.469 [2024-11-25 12:09:51.409332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.469 [2024-11-25 12:09:51.409340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.469 [2024-11-25 12:09:51.409349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.469 [2024-11-25 12:09:51.409357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.469 [2024-11-25 12:09:51.409368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.469 [2024-11-25 12:09:51.409375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.469 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:50.469 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.469 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.469 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.469 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.469 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.469 12:09:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.469 12:09:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.469 12:09:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.727 12:09:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:12:02.965 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:02.965 12:10:03 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67464 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67464 ']' 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67464 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67464 00:12:02.965 killing process with pid 67464 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67464' 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67464 00:12:02.965 12:10:03 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67464 00:12:04.338 12:10:05 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:04.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.902 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.902 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.902 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:04.902 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:04.902 00:12:04.902 real 2m30.324s 00:12:04.902 user 1m52.347s 00:12:04.902 sys 0m16.966s 00:12:04.902 
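Two things close the test out above. First, the 45.21 s helper time is consistent with the loop structure: the one-off sleep 6 at @36 plus three 12 s settles at @66 accounts for 42 s, and the roughly 1 s of 0.5 s polling per event covers the rest. Second, the killprocess steps (@954-@978) are the usual guarded teardown of the SPDK target; a sketch under the names shown in the trace:

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1                            # @954: require a pid
        kill -0 "$pid" || return 1                           # @958: process must still exist
        [[ $(uname) == Linux ]] &&                           # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 for an SPDK target
        # @964 checks for a sudo wrapper (not the case here); this sketch omits that branch
        echo "killing process with pid $pid"                 # @972
        kill "$pid" && wait "$pid"                           # @973/@978: signal, then reap
    }

    killprocess 67464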
************************************ 00:12:04.902 END TEST sw_hotplug 00:12:04.902 ************************************ 00:12:04.902 12:10:05 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.902 12:10:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.902 12:10:05 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:04.902 12:10:05 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:04.902 12:10:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.902 12:10:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.902 12:10:05 -- common/autotest_common.sh@10 -- # set +x 00:12:04.902 ************************************ 00:12:04.902 START TEST nvme_xnvme 00:12:04.902 ************************************ 00:12:04.902 12:10:05 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:04.902 * Looking for test storage... 00:12:05.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:05.162 12:10:05 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.162 12:10:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.162 12:10:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.162 12:10:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.162 --rc genhtml_branch_coverage=1 00:12:05.162 --rc genhtml_function_coverage=1 00:12:05.162 --rc genhtml_legend=1 00:12:05.162 --rc geninfo_all_blocks=1 00:12:05.162 --rc geninfo_unexecuted_blocks=1 00:12:05.162 00:12:05.162 ' 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.162 --rc genhtml_branch_coverage=1 00:12:05.162 --rc genhtml_function_coverage=1 00:12:05.162 --rc genhtml_legend=1 00:12:05.162 --rc geninfo_all_blocks=1 00:12:05.162 --rc geninfo_unexecuted_blocks=1 00:12:05.162 00:12:05.162 ' 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.162 --rc genhtml_branch_coverage=1 00:12:05.162 --rc genhtml_function_coverage=1 00:12:05.162 --rc genhtml_legend=1 00:12:05.162 --rc geninfo_all_blocks=1 00:12:05.162 --rc geninfo_unexecuted_blocks=1 00:12:05.162 00:12:05.162 ' 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.162 --rc genhtml_branch_coverage=1 00:12:05.162 --rc genhtml_function_coverage=1 00:12:05.162 --rc genhtml_legend=1 00:12:05.162 --rc geninfo_all_blocks=1 00:12:05.162 --rc geninfo_unexecuted_blocks=1 00:12:05.162 00:12:05.162 ' 00:12:05.162 12:10:06 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:05.162 12:10:06 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:05.162 12:10:06 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:05.162 12:10:06 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
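Stepping back to the start of this test: the scripts/common.sh walk traced there (@333-@368, triggered by lt 1.15 2) is a dotted-version comparison. It splits both versions on .-:, compares component by component with missing components treated as 0, and maps the result onto the requested operator. A sketch reconstructed from those trace tags, not copied from the source:

    lt() { cmp_versions "$1" '<' "$2"; }                     # cf. @373

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"                       # @336: 1.15 -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"                       # @337: 2    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}                # @340/@341
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do  # @364
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && lt=1 && break           # sketch of @367/@368
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && gt=1 && break
        done
        case $op in                                          # @344
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
            *)   ((lt == 0 && gt == 0)) ;;
        esac
    }

Here lt 1.15 2 succeeds (the installed lcov reports 1.15, below 2), so the pre-2.0 --rc lcov_branch_coverage=1 flag spelling exported just after it (@1694-@1707) is the one this lcov understands.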
00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:05.162 12:10:06 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:05.163 12:10:06 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:05.163 12:10:06 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:05.163 12:10:06 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:05.163 #define SPDK_CONFIG_H 00:12:05.163 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:05.163 #define SPDK_CONFIG_APPS 1 00:12:05.163 #define SPDK_CONFIG_ARCH native 00:12:05.163 #define SPDK_CONFIG_ASAN 1 00:12:05.163 #undef SPDK_CONFIG_AVAHI 00:12:05.163 #undef SPDK_CONFIG_CET 00:12:05.163 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:05.163 #define SPDK_CONFIG_COVERAGE 1 00:12:05.163 #define SPDK_CONFIG_CROSS_PREFIX 00:12:05.163 #undef SPDK_CONFIG_CRYPTO 00:12:05.163 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:05.163 #undef SPDK_CONFIG_CUSTOMOCF 00:12:05.163 #undef SPDK_CONFIG_DAOS 00:12:05.163 #define SPDK_CONFIG_DAOS_DIR 00:12:05.163 #define SPDK_CONFIG_DEBUG 1 00:12:05.163 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:05.163 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:05.163 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:05.163 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:05.163 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:05.163 #undef SPDK_CONFIG_DPDK_UADK 00:12:05.163 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:05.163 #define SPDK_CONFIG_EXAMPLES 1 00:12:05.163 #undef SPDK_CONFIG_FC 00:12:05.163 #define SPDK_CONFIG_FC_PATH 00:12:05.163 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:05.163 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:05.163 #define SPDK_CONFIG_FSDEV 1 00:12:05.163 #undef SPDK_CONFIG_FUSE 00:12:05.163 #undef SPDK_CONFIG_FUZZER 00:12:05.163 #define SPDK_CONFIG_FUZZER_LIB 00:12:05.163 #undef SPDK_CONFIG_GOLANG 00:12:05.163 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:05.163 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:05.163 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:05.163 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:05.163 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:05.163 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:05.163 #undef SPDK_CONFIG_HAVE_LZ4 00:12:05.163 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:05.163 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:05.163 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:05.163 #define SPDK_CONFIG_IDXD 1 00:12:05.163 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:05.163 #undef SPDK_CONFIG_IPSEC_MB 00:12:05.163 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:05.163 #define SPDK_CONFIG_ISAL 1 00:12:05.163 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:05.163 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:05.163 #define SPDK_CONFIG_LIBDIR 00:12:05.163 #undef SPDK_CONFIG_LTO 00:12:05.163 #define SPDK_CONFIG_MAX_LCORES 128 00:12:05.163 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:05.163 #define SPDK_CONFIG_NVME_CUSE 1 00:12:05.163 #undef SPDK_CONFIG_OCF 00:12:05.163 #define SPDK_CONFIG_OCF_PATH 00:12:05.163 #define SPDK_CONFIG_OPENSSL_PATH 00:12:05.163 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:05.163 #define SPDK_CONFIG_PGO_DIR 00:12:05.163 #undef SPDK_CONFIG_PGO_USE 00:12:05.163 #define SPDK_CONFIG_PREFIX /usr/local 00:12:05.163 #undef SPDK_CONFIG_RAID5F 00:12:05.163 #undef SPDK_CONFIG_RBD 00:12:05.163 #define SPDK_CONFIG_RDMA 1 00:12:05.163 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:05.163 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:05.163 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:05.163 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:05.163 #define SPDK_CONFIG_SHARED 1 00:12:05.163 #undef SPDK_CONFIG_SMA 00:12:05.163 #define SPDK_CONFIG_TESTS 1 00:12:05.163 #undef SPDK_CONFIG_TSAN 00:12:05.163 #define SPDK_CONFIG_UBLK 1 00:12:05.163 #define SPDK_CONFIG_UBSAN 1 00:12:05.163 #undef SPDK_CONFIG_UNIT_TESTS 00:12:05.163 #undef SPDK_CONFIG_URING 00:12:05.163 #define SPDK_CONFIG_URING_PATH 00:12:05.163 #undef SPDK_CONFIG_URING_ZNS 00:12:05.163 #undef SPDK_CONFIG_USDT 00:12:05.163 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:05.163 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:05.163 #undef SPDK_CONFIG_VFIO_USER 00:12:05.163 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:05.163 #define SPDK_CONFIG_VHOST 1 00:12:05.163 #define SPDK_CONFIG_VIRTIO 1 00:12:05.163 #undef SPDK_CONFIG_VTUNE 00:12:05.163 #define SPDK_CONFIG_VTUNE_DIR 00:12:05.163 #define SPDK_CONFIG_WERROR 1 00:12:05.163 #define SPDK_CONFIG_WPDK_DIR 00:12:05.163 #define SPDK_CONFIG_XNVME 1 00:12:05.163 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:05.163 12:10:06 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:05.163 12:10:06 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.163 12:10:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.163 12:10:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.163 12:10:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.163 12:10:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.163 12:10:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.163 12:10:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.164 12:10:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.164 12:10:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:05.164 12:10:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:05.164 
12:10:06 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:05.164 12:10:06 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:05.164 12:10:06 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:05.165 12:10:06 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:05.165 12:10:06 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
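[editor's note] The records above assemble the sanitizer environment for the whole test run: a LeakSanitizer suppression file is rebuilt from scratch and the ASAN/UBSAN knobs are exported before any SPDK binary starts. Condensed into a standalone sketch (every value is copied verbatim from the trace; only the plumbing around them is illustrative):

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"                      # start from a clean file so stale rules do not linger
echo "leak:libfuse3.so" > "$asan_suppression_file"   # known libfuse3 leak the harness chooses to ignore
export LSAN_OPTIONS="suppressions=$asan_suppression_file"
export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'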
00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68833 ]] 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68833 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.bS77nF 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.bS77nF/tests/xnvme /tmp/spdk.bS77nF 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:05.165 12:10:06 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975650304 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592924160 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:05.165 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260621312 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265384960 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975650304 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592924160 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91403231232 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=8299548672 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:05.166 * Looking for test storage... 
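[editor's note] The mount table read above feeds set_test_storage: df -T is parsed into per-mount associative arrays, then the candidate directories are walked until one has enough free space for the test. A simplified sketch of that selection loop (array names, the awk filter, and the 2214592512-byte request follow the trace; sizes are taken as df reports them):

declare -A mounts fss sizes avails uses
requested_size=2214592512   # 2 GiB plus slack, as requested in the trace
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source fss["$mount"]=$fs
  sizes["$mount"]=$size avails["$mount"]=$avail uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)
# Keep the first candidate directory whose filesystem has enough room.
for target_dir in "${storage_candidates[@]}"; do
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  [[ ${avails[$mount]} -ge $requested_size ]] && break
done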
00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975650304 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:05.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.166 12:10:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:05.424 12:10:06 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.424 12:10:06 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.424 --rc genhtml_branch_coverage=1 00:12:05.424 --rc genhtml_function_coverage=1 00:12:05.424 --rc genhtml_legend=1 00:12:05.424 --rc geninfo_all_blocks=1 00:12:05.424 --rc geninfo_unexecuted_blocks=1 00:12:05.424 00:12:05.424 ' 00:12:05.424 12:10:06 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.424 --rc genhtml_branch_coverage=1 00:12:05.424 --rc genhtml_function_coverage=1 00:12:05.424 --rc genhtml_legend=1 00:12:05.424 --rc geninfo_all_blocks=1 
00:12:05.424 --rc geninfo_unexecuted_blocks=1 00:12:05.424 00:12:05.424 ' 00:12:05.424 12:10:06 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.424 --rc genhtml_branch_coverage=1 00:12:05.424 --rc genhtml_function_coverage=1 00:12:05.424 --rc genhtml_legend=1 00:12:05.424 --rc geninfo_all_blocks=1 00:12:05.424 --rc geninfo_unexecuted_blocks=1 00:12:05.424 00:12:05.424 ' 00:12:05.424 12:10:06 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.424 --rc genhtml_branch_coverage=1 00:12:05.424 --rc genhtml_function_coverage=1 00:12:05.424 --rc genhtml_legend=1 00:12:05.424 --rc geninfo_all_blocks=1 00:12:05.424 --rc geninfo_unexecuted_blocks=1 00:12:05.424 00:12:05.424 ' 00:12:05.424 12:10:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.424 12:10:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.424 12:10:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.424 12:10:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.424 12:10:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.424 12:10:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:05.424 12:10:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.424 12:10:06 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:05.424 12:10:06 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:05.424 12:10:06 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:05.424 12:10:06 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:05.424 12:10:06 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:05.424 12:10:06 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:05.424 12:10:06 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:05.425 12:10:06 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:05.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:05.682 Waiting for block devices as requested 00:12:05.682 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.940 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.940 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.940 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.197 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:11.197 12:10:11 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:11.456 12:10:12 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:11.456 12:10:12 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:11.456 No valid GPT data, bailing 00:12:11.456 12:10:12 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:11.456 12:10:12 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:11.456 12:10:12 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:11.456 12:10:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:11.456 12:10:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.456 12:10:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.456 12:10:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.456 ************************************ 00:12:11.456 START TEST xnvme_rpc 00:12:11.456 ************************************ 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69226 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69226 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69226 ']' 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.456 12:10:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:11.714 [2024-11-25 12:10:12.569256] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
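[editor's note] waitforlisten, invoked above, blocks until the freshly launched spdk_tgt (pid 69226 in this run) is actually answering RPCs on /var/tmp/spdk.sock. A rough equivalent of that wait loop, assuming SPDK's scripts/rpc.py is available (the retry bound mirrors max_retries=100 from the trace; the poll interval is illustrative):

spdk_tgt_pid=69226            # pid printed by the trace; any freshly started spdk_tgt works
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
  # Fail fast if the target died; otherwise poll until the socket answers an RPC.
  kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
  scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && break
  sleep 0.5
done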
00:12:11.714 [2024-11-25 12:10:12.569378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69226 ] 00:12:11.714 [2024-11-25 12:10:12.725733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.971 [2024-11-25 12:10:12.825725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.538 xnvme_bdev 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.538 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69226 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69226 ']' 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69226 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69226 00:12:12.797 killing process with pid 69226 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69226' 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69226 00:12:12.797 12:10:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69226 00:12:14.173 00:12:14.173 real 0m2.644s 00:12:14.173 user 0m2.716s 00:12:14.173 sys 0m0.393s 00:12:14.173 12:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.173 12:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.173 ************************************ 00:12:14.173 END TEST xnvme_rpc 00:12:14.173 ************************************ 00:12:14.173 12:10:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:14.173 12:10:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.173 12:10:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.173 12:10:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.173 ************************************ 00:12:14.173 START TEST xnvme_bdevperf 00:12:14.173 ************************************ 00:12:14.173 12:10:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:14.173 12:10:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:14.173 12:10:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:14.173 12:10:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:14.173 12:10:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:14.173 12:10:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
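[editor's note] Every check in the xnvme_rpc test above goes through the same helper: dump the bdev subsystem config with framework_get_config and pull one parameter of the bdev_xnvme_create call out with jq. Pulled together, the whole verification amounts to something like this (the jq filter and expected values are verbatim from the trace; the rpc.py path is assumed):

rpc_xnvme() {   # usage: rpc_xnvme name|filename|io_mechanism|conserve_cpu
  scripts/rpc.py framework_get_config bdev \
    | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}
[[ $(rpc_xnvme name) == xnvme_bdev ]]
[[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
[[ $(rpc_xnvme io_mechanism) == libaio ]]
[[ $(rpc_xnvme conserve_cpu) == false ]]
scripts/rpc.py bdev_xnvme_delete xnvme_bdev   # clean up the bdev when done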
00:12:14.173 12:10:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:14.173 12:10:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:14.173 { 00:12:14.173 "subsystems": [ 00:12:14.173 { 00:12:14.173 "subsystem": "bdev", 00:12:14.173 "config": [ 00:12:14.173 { 00:12:14.173 "params": { 00:12:14.173 "io_mechanism": "libaio", 00:12:14.173 "conserve_cpu": false, 00:12:14.173 "filename": "/dev/nvme0n1", 00:12:14.173 "name": "xnvme_bdev" 00:12:14.173 }, 00:12:14.173 "method": "bdev_xnvme_create" 00:12:14.173 }, 00:12:14.173 { 00:12:14.173 "method": "bdev_wait_for_examine" 00:12:14.173 } 00:12:14.173 ] 00:12:14.173 } 00:12:14.173 ] 00:12:14.173 } 00:12:14.173 [2024-11-25 12:10:15.236333] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:12:14.173 [2024-11-25 12:10:15.236571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69294 ] 00:12:14.432 [2024-11-25 12:10:15.396224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.432 [2024-11-25 12:10:15.496016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.690 Running I/O for 5 seconds... 00:12:17.055 37458.00 IOPS, 146.32 MiB/s [2024-11-25T12:10:19.071Z] 34596.50 IOPS, 135.14 MiB/s [2024-11-25T12:10:20.020Z] 35201.67 IOPS, 137.51 MiB/s [2024-11-25T12:10:20.962Z] 36096.00 IOPS, 141.00 MiB/s [2024-11-25T12:10:20.962Z] 37172.60 IOPS, 145.21 MiB/s 00:12:19.882 Latency(us) 00:12:19.882 [2024-11-25T12:10:20.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.882 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:19.882 xnvme_bdev : 5.00 37164.21 145.17 0.00 0.00 1717.96 293.02 5570.56 00:12:19.882 [2024-11-25T12:10:20.962Z] =================================================================================================================== 00:12:19.882 [2024-11-25T12:10:20.962Z] Total : 37164.21 145.17 0.00 0.00 1717.96 293.02 5570.56 00:12:20.451 12:10:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:20.451 12:10:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:20.451 12:10:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:20.451 12:10:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:20.451 12:10:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:20.451 { 00:12:20.451 "subsystems": [ 00:12:20.451 { 00:12:20.451 "subsystem": "bdev", 00:12:20.451 "config": [ 00:12:20.451 { 00:12:20.451 "params": { 00:12:20.451 "io_mechanism": "libaio", 00:12:20.451 "conserve_cpu": false, 00:12:20.451 "filename": "/dev/nvme0n1", 00:12:20.451 "name": "xnvme_bdev" 00:12:20.451 }, 00:12:20.451 "method": "bdev_xnvme_create" 00:12:20.451 }, 00:12:20.451 { 00:12:20.451 "method": "bdev_wait_for_examine" 00:12:20.451 } 00:12:20.451 ] 00:12:20.451 } 00:12:20.451 ] 00:12:20.451 } 00:12:20.710 [2024-11-25 12:10:21.553477] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
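The bdevperf runs in this test never write a config file to disk: gen_conf prints the {"subsystems": [...]} JSON shown above on stdout, and the harness hands it to bdevperf through process substitution, which is what the --json /dev/fd/62 in the command line resolves to. A sketch of the equivalent invocation (gen_conf and the binary path are taken from this log, not guaranteed elsewhere):

    # <(gen_conf) becomes /dev/fd/NN, so bdevperf reads the JSON from a pipe.
    ./build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096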
00:12:20.710 [2024-11-25 12:10:21.553761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69369 ] 00:12:20.710 [2024-11-25 12:10:21.724246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.970 [2024-11-25 12:10:21.829799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.232 Running I/O for 5 seconds... 00:12:23.110 34305.00 IOPS, 134.00 MiB/s [2024-11-25T12:10:25.130Z] 33729.00 IOPS, 131.75 MiB/s [2024-11-25T12:10:26.510Z] 33957.33 IOPS, 132.65 MiB/s [2024-11-25T12:10:27.450Z] 34501.75 IOPS, 134.77 MiB/s 00:12:26.370 Latency(us) 00:12:26.370 [2024-11-25T12:10:27.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.370 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:26.370 xnvme_bdev : 5.00 34463.66 134.62 0.00 0.00 1852.23 172.50 54041.99 00:12:26.370 [2024-11-25T12:10:27.450Z] =================================================================================================================== 00:12:26.370 [2024-11-25T12:10:27.450Z] Total : 34463.66 134.62 0.00 0.00 1852.23 172.50 54041.99 00:12:26.941 00:12:26.941 real 0m12.718s 00:12:26.941 user 0m4.751s 00:12:26.941 sys 0m5.947s 00:12:26.941 12:10:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.941 12:10:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.941 ************************************ 00:12:26.941 END TEST xnvme_bdevperf 00:12:26.941 ************************************ 00:12:26.941 12:10:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:26.941 12:10:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.941 12:10:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.941 12:10:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.941 ************************************ 00:12:26.941 START TEST xnvme_fio_plugin 00:12:26.941 ************************************ 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:26.941 12:10:27 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:26.941 12:10:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.941 { 00:12:26.941 "subsystems": [ 00:12:26.941 { 00:12:26.941 "subsystem": "bdev", 00:12:26.941 "config": [ 00:12:26.941 { 00:12:26.941 "params": { 00:12:26.941 "io_mechanism": "libaio", 00:12:26.941 "conserve_cpu": false, 00:12:26.941 "filename": "/dev/nvme0n1", 00:12:26.941 "name": "xnvme_bdev" 00:12:26.941 }, 00:12:26.941 "method": "bdev_xnvme_create" 00:12:26.941 }, 00:12:26.941 { 00:12:26.941 "method": "bdev_wait_for_examine" 00:12:26.941 } 00:12:26.941 ] 00:12:26.941 } 00:12:26.941 ] 00:12:26.941 } 00:12:27.200 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:27.200 fio-3.35 00:12:27.200 Starting 1 thread 00:12:33.789 00:12:33.789 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69490: Mon Nov 25 12:10:33 2024 00:12:33.789 read: IOPS=38.3k, BW=150MiB/s (157MB/s)(748MiB/5005msec) 00:12:33.789 slat (usec): min=4, max=1756, avg=20.28, stdev=73.91 00:12:33.789 clat (usec): min=61, max=12031, avg=1153.48, stdev=593.03 00:12:33.789 lat (usec): min=119, max=12037, avg=1173.76, stdev=590.59 00:12:33.789 clat percentiles (usec): 00:12:33.789 | 1.00th=[ 217], 5.00th=[ 363], 10.00th=[ 498], 20.00th=[ 668], 00:12:33.789 | 30.00th=[ 807], 40.00th=[ 947], 50.00th=[ 1074], 60.00th=[ 1205], 00:12:33.789 | 70.00th=[ 1369], 80.00th=[ 1565], 90.00th=[ 1893], 95.00th=[ 2245], 00:12:33.789 | 99.00th=[ 2966], 99.50th=[ 3294], 99.90th=[ 4424], 99.95th=[ 5538], 00:12:33.789 | 99.99th=[ 7570] 00:12:33.789 bw ( KiB/s): min=142208, max=167744, per=99.71%, avg=152654.22, stdev=7614.35, 
samples=9 00:12:33.789 iops : min=35552, max=41936, avg=38163.56, stdev=1903.59, samples=9 00:12:33.789 lat (usec) : 100=0.01%, 250=1.75%, 500=8.28%, 750=15.84%, 1000=18.49% 00:12:33.789 lat (msec) : 2=47.59%, 4=7.88%, 10=0.16%, 20=0.01% 00:12:33.789 cpu : usr=37.25%, sys=51.94%, ctx=10, majf=0, minf=764 00:12:33.789 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=6.6%, 16=23.2%, 32=65.5%, >=64=2.3% 00:12:33.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.789 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:33.789 issued rwts: total=191572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.789 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.789 00:12:33.789 Run status group 0 (all jobs): 00:12:33.789 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=748MiB (785MB), run=5005-5005msec 00:12:33.789 ----------------------------------------------------- 00:12:33.789 Suppressions used: 00:12:33.789 count bytes template 00:12:33.789 1 11 /usr/src/fio/parse.c 00:12:33.789 1 8 libtcmalloc_minimal.so 00:12:33.789 1 904 libcrypto.so 00:12:33.789 ----------------------------------------------------- 00:12:33.789 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:33.789 12:10:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.789 { 00:12:33.789 "subsystems": [ 00:12:33.789 { 00:12:33.789 "subsystem": "bdev", 00:12:33.789 "config": [ 00:12:33.789 { 00:12:33.789 "params": { 00:12:33.789 "io_mechanism": "libaio", 00:12:33.789 "conserve_cpu": false, 00:12:33.789 "filename": "/dev/nvme0n1", 00:12:33.789 "name": "xnvme_bdev" 00:12:33.789 }, 00:12:33.789 "method": "bdev_xnvme_create" 00:12:33.789 }, 00:12:33.789 { 00:12:33.789 "method": "bdev_wait_for_examine" 00:12:33.789 } 00:12:33.789 ] 00:12:33.790 } 00:12:33.790 ] 00:12:33.790 } 00:12:34.049 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:34.049 fio-3.35 00:12:34.049 Starting 1 thread 00:12:40.634 00:12:40.634 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69580: Mon Nov 25 12:10:40 2024 00:12:40.634 write: IOPS=13.1k, BW=51.1MiB/s (53.5MB/s)(256MiB/5007msec); 0 zone resets 00:12:40.634 slat (usec): min=4, max=915, avg=13.22, stdev=33.09 00:12:40.634 clat (usec): min=6, max=59698, avg=4781.89, stdev=4474.35 00:12:40.634 lat (usec): min=47, max=59703, avg=4795.12, stdev=4472.60 00:12:40.634 clat percentiles (usec): 00:12:40.634 | 1.00th=[ 65], 5.00th=[ 172], 10.00th=[ 347], 20.00th=[ 652], 00:12:40.634 | 30.00th=[ 1598], 40.00th=[ 3687], 50.00th=[ 4621], 60.00th=[ 5473], 00:12:40.634 | 70.00th=[ 6325], 80.00th=[ 7570], 90.00th=[ 9372], 95.00th=[11469], 00:12:40.634 | 99.00th=[14615], 99.50th=[15533], 99.90th=[58459], 99.95th=[58983], 00:12:40.634 | 99.99th=[59507] 00:12:40.634 bw ( KiB/s): min=43104, max=61008, per=100.00%, avg=52299.20, stdev=6522.32, samples=10 00:12:40.634 iops : min=10776, max=15252, avg=13074.80, stdev=1630.58, samples=10 00:12:40.634 lat (usec) : 10=0.02%, 20=0.03%, 50=0.30%, 100=2.26%, 250=4.75% 00:12:40.634 lat (usec) : 500=8.78%, 750=6.21%, 1000=4.27% 00:12:40.634 lat (msec) : 2=4.34%, 4=12.09%, 10=48.70%, 20=7.96%, 50=0.05% 00:12:40.634 lat (msec) : 100=0.25% 00:12:40.634 cpu : usr=81.94%, sys=9.29%, ctx=19, majf=0, minf=764 00:12:40.634 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=1.7%, 32=81.5%, >=64=16.2% 00:12:40.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.634 complete : 0=0.0%, 4=93.9%, 8=2.8%, 16=2.3%, 32=0.8%, 64=0.2%, >=64=0.0% 00:12:40.634 issued rwts: total=0,65437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.634 00:12:40.634 Run status group 0 (all jobs): 00:12:40.634 WRITE: bw=51.1MiB/s (53.5MB/s), 51.1MiB/s-51.1MiB/s (53.5MB/s-53.5MB/s), io=256MiB (268MB), run=5007-5007msec 00:12:40.634 ----------------------------------------------------- 00:12:40.634 Suppressions used: 00:12:40.634 count bytes template 00:12:40.634 1 11 /usr/src/fio/parse.c 00:12:40.634 1 8 libtcmalloc_minimal.so 00:12:40.634 1 904 libcrypto.so 00:12:40.634 ----------------------------------------------------- 00:12:40.634 00:12:40.634 
************************************ 00:12:40.634 END TEST xnvme_fio_plugin 00:12:40.634 ************************************ 00:12:40.634 00:12:40.634 real 0m13.632s 00:12:40.634 user 0m8.648s 00:12:40.634 sys 0m3.602s 00:12:40.634 12:10:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.634 12:10:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:40.634 12:10:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:40.634 12:10:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:40.634 12:10:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:40.634 12:10:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:40.634 12:10:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:40.634 12:10:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.634 12:10:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.634 ************************************ 00:12:40.634 START TEST xnvme_rpc 00:12:40.634 ************************************ 00:12:40.634 12:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:40.634 12:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69662 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69662 00:12:40.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69662 ']' 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.635 12:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:40.897 [2024-11-25 12:10:41.739605] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
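The conserve_cpu passes reuse one create call by mapping the boolean onto an optional CLI flag with a bash associative array, exactly the cc["false"]= / cc["true"]=-c assignments visible above. A condensed sketch of that pattern:

    declare -A cc=( ["false"]="" ["true"]="-c" )
    for conserve in false true; do
        # ${cc[$conserve]} expands to nothing or to -c for bdev_xnvme_create.
        ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ${cc[$conserve]}
        ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    done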
00:12:40.897 [2024-11-25 12:10:41.739723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69662 ] 00:12:40.897 [2024-11-25 12:10:41.900870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.202 [2024-11-25 12:10:42.004167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.775 xnvme_bdev 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:41.775 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:41.776 12:10:42 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69662 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69662 ']' 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69662 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69662 00:12:41.776 killing process with pid 69662 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69662' 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69662 00:12:41.776 12:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69662 00:12:43.689 ************************************ 00:12:43.689 END TEST xnvme_rpc 00:12:43.689 ************************************ 00:12:43.689 00:12:43.689 real 0m2.706s 00:12:43.689 user 0m2.772s 00:12:43.689 sys 0m0.377s 00:12:43.689 12:10:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.689 12:10:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.689 12:10:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:43.689 12:10:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:43.689 12:10:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.689 12:10:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.689 ************************************ 00:12:43.689 START TEST xnvme_bdevperf 00:12:43.689 ************************************ 00:12:43.689 12:10:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:43.689 12:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:43.689 12:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:43.689 12:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:43.689 12:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:43.689 12:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:43.689 12:10:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:43.689 12:10:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.689 { 00:12:43.689 "subsystems": [ 00:12:43.689 { 00:12:43.689 "subsystem": "bdev", 00:12:43.689 "config": [ 00:12:43.689 { 00:12:43.689 "params": { 00:12:43.689 "io_mechanism": "libaio", 00:12:43.689 "conserve_cpu": true, 00:12:43.689 "filename": "/dev/nvme0n1", 00:12:43.689 "name": "xnvme_bdev" 00:12:43.689 }, 00:12:43.689 "method": "bdev_xnvme_create" 00:12:43.689 }, 00:12:43.689 { 00:12:43.689 "method": "bdev_wait_for_examine" 00:12:43.689 } 00:12:43.689 ] 00:12:43.689 } 00:12:43.689 ] 00:12:43.689 } 00:12:43.689 [2024-11-25 12:10:44.481662] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:12:43.689 [2024-11-25 12:10:44.481992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69736 ] 00:12:43.689 [2024-11-25 12:10:44.643428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.689 [2024-11-25 12:10:44.750103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.975 Running I/O for 5 seconds... 00:12:46.379 30770.00 IOPS, 120.20 MiB/s [2024-11-25T12:10:48.029Z] 30197.00 IOPS, 117.96 MiB/s [2024-11-25T12:10:49.412Z] 30198.33 IOPS, 117.96 MiB/s [2024-11-25T12:10:50.355Z] 31644.00 IOPS, 123.61 MiB/s [2024-11-25T12:10:50.355Z] 32032.80 IOPS, 125.13 MiB/s 00:12:49.275 Latency(us) 00:12:49.275 [2024-11-25T12:10:50.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.275 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:49.275 xnvme_bdev : 5.01 31998.72 124.99 0.00 0.00 1995.33 60.65 95178.44 00:12:49.275 [2024-11-25T12:10:50.355Z] =================================================================================================================== 00:12:49.275 [2024-11-25T12:10:50.355Z] Total : 31998.72 124.99 0.00 0.00 1995.33 60.65 95178.44 00:12:49.847 12:10:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:49.847 12:10:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:49.847 12:10:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:49.847 12:10:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:49.847 12:10:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:49.847 { 00:12:49.847 "subsystems": [ 00:12:49.847 { 00:12:49.847 "subsystem": "bdev", 00:12:49.847 "config": [ 00:12:49.847 { 00:12:49.847 "params": { 00:12:49.847 "io_mechanism": "libaio", 00:12:49.847 "conserve_cpu": true, 00:12:49.847 "filename": "/dev/nvme0n1", 00:12:49.847 "name": "xnvme_bdev" 00:12:49.847 }, 00:12:49.847 "method": "bdev_xnvme_create" 00:12:49.847 }, 00:12:49.847 { 00:12:49.847 "method": "bdev_wait_for_examine" 00:12:49.847 } 00:12:49.847 ] 00:12:49.847 } 00:12:49.847 ] 00:12:49.847 } 00:12:49.847 [2024-11-25 12:10:50.821524] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
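bdevperf's MiB/s column follows directly from the IOPS column and the 4 KiB I/O size set with -o 4096: IOPS * 4096 / 2^20. Checking the randread total just printed (31998.72 IOPS, reported as 124.99 MiB/s):

    # 31998.72 IOPS * 4096 bytes per I/O / 1048576 bytes per MiB = 124.995,
    # matching the 124.99 MiB/s column to rounding.
    awk 'BEGIN { printf "%.3f MiB/s\n", 31998.72 * 4096 / 1048576 }'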
00:12:49.847 [2024-11-25 12:10:50.821786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69810 ] 00:12:50.108 [2024-11-25 12:10:50.984264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.108 [2024-11-25 12:10:51.088466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.369 Running I/O for 5 seconds... 00:12:52.699 14828.00 IOPS, 57.92 MiB/s [2024-11-25T12:10:54.723Z] 17623.00 IOPS, 68.84 MiB/s [2024-11-25T12:10:55.666Z] 13724.67 IOPS, 53.61 MiB/s [2024-11-25T12:10:56.609Z] 10505.50 IOPS, 41.04 MiB/s [2024-11-25T12:10:56.609Z] 9007.40 IOPS, 35.19 MiB/s 00:12:55.529 Latency(us) 00:12:55.529 [2024-11-25T12:10:56.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.529 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:55.529 xnvme_bdev : 5.01 9020.99 35.24 0.00 0.00 7088.35 83.10 771106.66 00:12:55.529 [2024-11-25T12:10:56.609Z] =================================================================================================================== 00:12:55.529 [2024-11-25T12:10:56.609Z] Total : 9020.99 35.24 0.00 0.00 7088.35 83.10 771106.66 00:12:56.100 00:12:56.100 real 0m12.674s 00:12:56.100 user 0m7.739s 00:12:56.100 sys 0m3.722s 00:12:56.100 ************************************ 00:12:56.100 END TEST xnvme_bdevperf 00:12:56.100 ************************************ 00:12:56.100 12:10:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.100 12:10:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:56.100 12:10:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:56.100 12:10:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:56.100 12:10:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.100 12:10:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.100 ************************************ 00:12:56.100 START TEST xnvme_fio_plugin 00:12:56.100 ************************************ 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:56.100 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:56.101 12:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:56.360 { 00:12:56.360 "subsystems": [ 00:12:56.360 { 00:12:56.360 "subsystem": "bdev", 00:12:56.360 "config": [ 00:12:56.360 { 00:12:56.360 "params": { 00:12:56.360 "io_mechanism": "libaio", 00:12:56.360 "conserve_cpu": true, 00:12:56.360 "filename": "/dev/nvme0n1", 00:12:56.360 "name": "xnvme_bdev" 00:12:56.360 }, 00:12:56.360 "method": "bdev_xnvme_create" 00:12:56.360 }, 00:12:56.360 { 00:12:56.360 "method": "bdev_wait_for_examine" 00:12:56.360 } 00:12:56.360 ] 00:12:56.360 } 00:12:56.360 ] 00:12:56.360 } 00:12:56.360 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:56.360 fio-3.35 00:12:56.360 Starting 1 thread 00:13:03.027 00:13:03.027 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69925: Mon Nov 25 12:11:03 2024 00:13:03.027 read: IOPS=38.2k, BW=149MiB/s (157MB/s)(747MiB/5001msec) 00:13:03.027 slat (usec): min=4, max=1606, avg=19.73, stdev=78.65 00:13:03.027 clat (usec): min=104, max=28325, avg=1132.16, stdev=529.70 00:13:03.027 lat (usec): min=163, max=28330, avg=1151.89, stdev=525.17 00:13:03.027 clat percentiles (usec): 00:13:03.027 | 1.00th=[ 219], 5.00th=[ 383], 10.00th=[ 553], 20.00th=[ 725], 00:13:03.027 | 30.00th=[ 857], 40.00th=[ 971], 50.00th=[ 1074], 60.00th=[ 1188], 00:13:03.027 | 70.00th=[ 1319], 80.00th=[ 1483], 90.00th=[ 1745], 95.00th=[ 2024], 00:13:03.027 | 99.00th=[ 2704], 99.50th=[ 3032], 99.90th=[ 3589], 99.95th=[ 3818], 00:13:03.027 | 99.99th=[ 7963] 00:13:03.027 bw ( KiB/s): min=137288, max=159080, 
per=99.70%, avg=152488.11, stdev=6789.25, samples=9 00:13:03.027 iops : min=34322, max=39770, avg=38122.00, stdev=1697.32, samples=9 00:13:03.027 lat (usec) : 250=1.72%, 500=6.48%, 750=13.41%, 1000=20.87% 00:13:03.027 lat (msec) : 2=52.23%, 4=5.25%, 10=0.03%, 20=0.01%, 50=0.01% 00:13:03.027 cpu : usr=39.22%, sys=51.66%, ctx=11, majf=0, minf=764 00:13:03.027 IO depths : 1=0.4%, 2=1.2%, 4=3.5%, 8=9.4%, 16=23.9%, 32=59.5%, >=64=2.0% 00:13:03.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.027 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:13:03.027 issued rwts: total=191224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:03.027 00:13:03.027 Run status group 0 (all jobs): 00:13:03.027 READ: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=747MiB (783MB), run=5001-5001msec 00:13:03.027 ----------------------------------------------------- 00:13:03.027 Suppressions used: 00:13:03.027 count bytes template 00:13:03.027 1 11 /usr/src/fio/parse.c 00:13:03.027 1 8 libtcmalloc_minimal.so 00:13:03.027 1 904 libcrypto.so 00:13:03.027 ----------------------------------------------------- 00:13:03.027 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:03.027 12:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:03.027 12:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.027 
12:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.027 12:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:03.027 12:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:03.027 12:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.027 { 00:13:03.027 "subsystems": [ 00:13:03.027 { 00:13:03.027 "subsystem": "bdev", 00:13:03.027 "config": [ 00:13:03.027 { 00:13:03.027 "params": { 00:13:03.027 "io_mechanism": "libaio", 00:13:03.027 "conserve_cpu": true, 00:13:03.027 "filename": "/dev/nvme0n1", 00:13:03.027 "name": "xnvme_bdev" 00:13:03.027 }, 00:13:03.027 "method": "bdev_xnvme_create" 00:13:03.027 }, 00:13:03.027 { 00:13:03.027 "method": "bdev_wait_for_examine" 00:13:03.027 } 00:13:03.027 ] 00:13:03.027 } 00:13:03.027 ] 00:13:03.027 } 00:13:03.286 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:03.286 fio-3.35 00:13:03.286 Starting 1 thread 00:13:09.866 00:13:09.866 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70017: Mon Nov 25 12:11:09 2024 00:13:09.866 write: IOPS=22.5k, BW=88.0MiB/s (92.3MB/s)(440MiB/5003msec); 0 zone resets 00:13:09.866 slat (usec): min=4, max=3072, avg=16.04, stdev=54.37 00:13:09.866 clat (usec): min=9, max=35489, avg=2469.57, stdev=3351.12 00:13:09.866 lat (usec): min=47, max=35624, avg=2485.61, stdev=3348.34 00:13:09.866 clat percentiles (usec): 00:13:09.866 | 1.00th=[ 129], 5.00th=[ 281], 10.00th=[ 396], 20.00th=[ 594], 00:13:09.866 | 30.00th=[ 750], 40.00th=[ 881], 50.00th=[ 1029], 60.00th=[ 1221], 00:13:09.866 | 70.00th=[ 1549], 80.00th=[ 3720], 90.00th=[ 7963], 95.00th=[10028], 00:13:09.866 | 99.00th=[14091], 99.50th=[15664], 99.90th=[18482], 99.95th=[30802], 00:13:09.866 | 99.99th=[33162] 00:13:09.866 bw ( KiB/s): min=39480, max=173528, per=99.03%, avg=89242.67, stdev=57677.85, samples=9 00:13:09.866 iops : min= 9870, max=43382, avg=22310.67, stdev=14419.46, samples=9 00:13:09.866 lat (usec) : 10=0.01%, 20=0.01%, 50=0.06%, 100=0.46%, 250=3.37% 00:13:09.866 lat (usec) : 500=11.24%, 750=14.83%, 1000=17.96% 00:13:09.866 lat (msec) : 2=27.64%, 4=4.98%, 10=14.28%, 20=5.10%, 50=0.06% 00:13:09.866 cpu : usr=68.79%, sys=22.43%, ctx=21, majf=0, minf=764 00:13:09.866 IO depths : 1=0.2%, 2=0.6%, 4=2.1%, 8=6.1%, 16=15.6%, 32=69.1%, >=64=6.4% 00:13:09.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.866 complete : 0=0.0%, 4=96.4%, 8=1.1%, 16=0.9%, 32=0.6%, 64=1.1%, >=64=0.0% 00:13:09.866 issued rwts: total=0,112718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.866 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:09.866 00:13:09.866 Run status group 0 (all jobs): 00:13:09.866 WRITE: bw=88.0MiB/s (92.3MB/s), 88.0MiB/s-88.0MiB/s (92.3MB/s-92.3MB/s), io=440MiB (462MB), run=5003-5003msec 00:13:09.866 ----------------------------------------------------- 00:13:09.866 Suppressions used: 00:13:09.866 count bytes template 00:13:09.866 1 11 /usr/src/fio/parse.c 00:13:09.866 1 8 libtcmalloc_minimal.so 00:13:09.866 1 904 libcrypto.so 00:13:09.866 ----------------------------------------------------- 00:13:09.866 00:13:09.866 
************************************ 00:13:09.866 END TEST xnvme_fio_plugin 00:13:09.866 ************************************ 00:13:09.866 00:13:09.866 real 0m13.672s 00:13:09.866 user 0m8.128s 00:13:09.866 sys 0m4.233s 00:13:09.866 12:11:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.866 12:11:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:09.866 12:11:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:09.866 12:11:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.866 12:11:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.866 12:11:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.866 ************************************ 00:13:09.866 START TEST xnvme_rpc 00:13:09.866 ************************************ 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:09.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70103 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70103 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70103 ']' 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.866 12:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:10.128 [2024-11-25 12:11:10.957067] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
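The xnvme_fio_plugin tests above drive fio through SPDK's external bdev ioengine instead of the kernel: the harness resolves the ASAN runtime from ldd output, preloads it ahead of the plugin (ASAN generally has to come first in LD_PRELOAD), and feeds fio the same JSON config over /dev/fd/62. A trimmed sketch of the final invocation, with the fio tree at /usr/src/fio and the plugin path taken from this log:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev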
00:13:10.128 [2024-11-25 12:11:10.957200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70103 ] 00:13:10.128 [2024-11-25 12:11:11.118172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.388 [2024-11-25 12:11:11.218553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.960 xnvme_bdev 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:10.960 12:11:11 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70103 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70103 ']' 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70103 00:13:10.960 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:10.961 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.961 12:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70103 00:13:10.961 killing process with pid 70103 00:13:10.961 12:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.961 12:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.961 12:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70103' 00:13:10.961 12:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70103 00:13:10.961 12:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70103 00:13:12.875 ************************************ 00:13:12.875 END TEST xnvme_rpc 00:13:12.875 ************************************ 00:13:12.875 00:13:12.875 real 0m2.669s 00:13:12.875 user 0m2.791s 00:13:12.875 sys 0m0.358s 00:13:12.875 12:11:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.875 12:11:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.875 12:11:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:12.875 12:11:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.875 12:11:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.875 12:11:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.875 ************************************ 00:13:12.875 START TEST xnvme_bdevperf 00:13:12.875 ************************************ 00:13:12.875 12:11:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:12.875 12:11:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:12.875 12:11:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:12.875 12:11:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:12.875 12:11:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:12.875 12:11:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:12.875 12:11:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:12.875 12:11:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:12.875 { 00:13:12.875 "subsystems": [ 00:13:12.875 { 00:13:12.875 "subsystem": "bdev", 00:13:12.875 "config": [ 00:13:12.875 { 00:13:12.875 "params": { 00:13:12.875 "io_mechanism": "io_uring", 00:13:12.875 "conserve_cpu": false, 00:13:12.875 "filename": "/dev/nvme0n1", 00:13:12.875 "name": "xnvme_bdev" 00:13:12.875 }, 00:13:12.875 "method": "bdev_xnvme_create" 00:13:12.875 }, 00:13:12.875 { 00:13:12.875 "method": "bdev_wait_for_examine" 00:13:12.875 } 00:13:12.875 ] 00:13:12.875 } 00:13:12.875 ] 00:13:12.875 } 00:13:12.875 [2024-11-25 12:11:13.676105] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:13:12.875 [2024-11-25 12:11:13.676213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70166 ] 00:13:12.875 [2024-11-25 12:11:13.837037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.875 [2024-11-25 12:11:13.943390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.136 Running I/O for 5 seconds... 00:13:15.462 40029.00 IOPS, 156.36 MiB/s [2024-11-25T12:11:17.486Z] 37573.50 IOPS, 146.77 MiB/s [2024-11-25T12:11:18.429Z] 36435.67 IOPS, 142.33 MiB/s [2024-11-25T12:11:19.384Z] 35608.50 IOPS, 139.10 MiB/s [2024-11-25T12:11:19.384Z] 34871.60 IOPS, 136.22 MiB/s 00:13:18.305 Latency(us) 00:13:18.305 [2024-11-25T12:11:19.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.305 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:18.305 xnvme_bdev : 5.01 34831.83 136.06 0.00 0.00 1831.26 39.58 129862.10 00:13:18.305 [2024-11-25T12:11:19.385Z] =================================================================================================================== 00:13:18.305 [2024-11-25T12:11:19.385Z] Total : 34831.83 136.06 0.00 0.00 1831.26 39.58 129862.10 00:13:19.252 12:11:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:19.252 12:11:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:19.252 12:11:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:19.252 12:11:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:19.252 12:11:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:19.252 { 00:13:19.252 "subsystems": [ 00:13:19.252 { 00:13:19.252 "subsystem": "bdev", 00:13:19.252 "config": [ 00:13:19.252 { 00:13:19.252 "params": { 00:13:19.252 "io_mechanism": "io_uring", 00:13:19.252 "conserve_cpu": false, 00:13:19.252 "filename": "/dev/nvme0n1", 00:13:19.252 "name": "xnvme_bdev" 00:13:19.252 }, 00:13:19.252 "method": "bdev_xnvme_create" 00:13:19.252 }, 00:13:19.252 { 00:13:19.252 "method": "bdev_wait_for_examine" 00:13:19.252 } 00:13:19.252 ] 00:13:19.252 } 00:13:19.252 ] 00:13:19.252 } 00:13:19.252 [2024-11-25 12:11:20.057214] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
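The io_uring pass repeats the same xnvme_rpc / xnvme_bdevperf / xnvme_fio_plugin battery; the only change is the io_mechanism string handed to bdev_xnvme_create, as the config blocks above show. A sketch of the outer loop (the exact mechanism list in xnvme_io is an assumption; this log exercises at least these two):

    for io in libaio io_uring; do
        ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev "$io"
        # ... run the rpc / bdevperf / fio_plugin tests for this mechanism ...
        ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    done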
00:13:19.252 [2024-11-25 12:11:20.057352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70248 ] 00:13:19.252 [2024-11-25 12:11:20.213991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.513 [2024-11-25 12:11:20.336443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.774 Running I/O for 5 seconds... 00:13:21.659 4349.00 IOPS, 16.99 MiB/s [2024-11-25T12:11:23.679Z] 4537.50 IOPS, 17.72 MiB/s [2024-11-25T12:11:24.626Z] 4674.33 IOPS, 18.26 MiB/s [2024-11-25T12:11:26.012Z] 5102.00 IOPS, 19.93 MiB/s 00:13:24.932 Latency(us) 00:13:24.932 [2024-11-25T12:11:26.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.932 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:24.932 xnvme_bdev : 5.00 9831.32 38.40 0.00 0.00 6504.11 73.65 606560.89 00:13:24.932 [2024-11-25T12:11:26.012Z] =================================================================================================================== 00:13:24.932 [2024-11-25T12:11:26.012Z] Total : 9831.32 38.40 0.00 0.00 6504.11 73.65 606560.89 00:13:25.506 ************************************ 00:13:25.506 END TEST xnvme_bdevperf 00:13:25.506 ************************************ 00:13:25.506 00:13:25.506 real 0m12.790s 00:13:25.506 user 0m6.077s 00:13:25.506 sys 0m6.438s 00:13:25.506 12:11:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.506 12:11:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:25.506 12:11:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:25.506 12:11:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.506 12:11:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.506 12:11:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.506 ************************************ 00:13:25.506 START TEST xnvme_fio_plugin 00:13:25.506 ************************************ 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:25.506 12:11:26 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:25.506 12:11:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.506 { 00:13:25.506 "subsystems": [ 00:13:25.506 { 00:13:25.506 "subsystem": "bdev", 00:13:25.506 "config": [ 00:13:25.506 { 00:13:25.506 "params": { 00:13:25.506 "io_mechanism": "io_uring", 00:13:25.506 "conserve_cpu": false, 00:13:25.506 "filename": "/dev/nvme0n1", 00:13:25.506 "name": "xnvme_bdev" 00:13:25.506 }, 00:13:25.506 "method": "bdev_xnvme_create" 00:13:25.506 }, 00:13:25.506 { 00:13:25.506 "method": "bdev_wait_for_examine" 00:13:25.506 } 00:13:25.506 ] 00:13:25.506 } 00:13:25.506 ] 00:13:25.506 } 00:13:25.767 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:25.767 fio-3.35 00:13:25.767 Starting 1 thread 00:13:32.356 00:13:32.356 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70367: Mon Nov 25 12:11:32 2024 00:13:32.356 read: IOPS=31.8k, BW=124MiB/s (130MB/s)(622MiB/5002msec) 00:13:32.356 slat (usec): min=2, max=390, avg= 4.45, stdev= 3.21 00:13:32.356 clat (usec): min=889, max=120751, avg=1825.95, stdev=479.92 00:13:32.356 lat (usec): min=893, max=120754, avg=1830.40, stdev=480.41 00:13:32.356 clat percentiles (usec): 00:13:32.356 | 1.00th=[ 1172], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1500], 00:13:32.356 | 30.00th=[ 1598], 40.00th=[ 1680], 50.00th=[ 1778], 60.00th=[ 1876], 00:13:32.356 | 70.00th=[ 1975], 80.00th=[ 2114], 90.00th=[ 2343], 95.00th=[ 2540], 00:13:32.356 | 99.00th=[ 2900], 99.50th=[ 3064], 99.90th=[ 3294], 99.95th=[ 3392], 00:13:32.356 | 99.99th=[ 3523] 00:13:32.356 bw ( KiB/s): min=120320, max=141312, per=100.00%, avg=127659.56, 
stdev=6311.77, samples=9 00:13:32.356 iops : min=30080, max=35328, avg=31914.89, stdev=1577.94, samples=9 00:13:32.356 lat (usec) : 1000=0.05% 00:13:32.356 lat (msec) : 2=71.44%, 4=28.51%, 250=0.01% 00:13:32.356 cpu : usr=32.59%, sys=65.39%, ctx=24, majf=0, minf=762 00:13:32.356 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:32.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.356 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:32.356 issued rwts: total=159158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.356 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.356 00:13:32.356 Run status group 0 (all jobs): 00:13:32.356 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=622MiB (652MB), run=5002-5002msec 00:13:32.356 ----------------------------------------------------- 00:13:32.356 Suppressions used: 00:13:32.356 count bytes template 00:13:32.356 1 11 /usr/src/fio/parse.c 00:13:32.356 1 8 libtcmalloc_minimal.so 00:13:32.356 1 904 libcrypto.so 00:13:32.356 ----------------------------------------------------- 00:13:32.356 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:32.618 12:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.618 { 00:13:32.618 "subsystems": [ 00:13:32.618 { 00:13:32.618 "subsystem": "bdev", 00:13:32.618 "config": [ 00:13:32.618 { 00:13:32.618 "params": { 00:13:32.618 "io_mechanism": "io_uring", 00:13:32.619 "conserve_cpu": false, 00:13:32.619 "filename": "/dev/nvme0n1", 00:13:32.619 "name": "xnvme_bdev" 00:13:32.619 }, 00:13:32.619 "method": "bdev_xnvme_create" 00:13:32.619 }, 00:13:32.619 { 00:13:32.619 "method": "bdev_wait_for_examine" 00:13:32.619 } 00:13:32.619 ] 00:13:32.619 } 00:13:32.619 ] 00:13:32.619 } 00:13:32.619 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:32.619 fio-3.35 00:13:32.619 Starting 1 thread 00:13:39.311 00:13:39.311 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70459: Mon Nov 25 12:11:39 2024 00:13:39.311 write: IOPS=25.1k, BW=98.0MiB/s (103MB/s)(491MiB/5012msec); 0 zone resets 00:13:39.311 slat (usec): min=2, max=294, avg= 4.55, stdev= 3.37 00:13:39.311 clat (usec): min=76, max=71806, avg=2377.19, stdev=3049.15 00:13:39.311 lat (usec): min=80, max=71810, avg=2381.75, stdev=3049.17 00:13:39.311 clat percentiles (usec): 00:13:39.311 | 1.00th=[ 453], 5.00th=[ 1106], 10.00th=[ 1352], 20.00th=[ 1532], 00:13:39.311 | 30.00th=[ 1647], 40.00th=[ 1745], 50.00th=[ 1844], 60.00th=[ 1942], 00:13:39.311 | 70.00th=[ 2057], 80.00th=[ 2212], 90.00th=[ 2507], 95.00th=[ 2900], 00:13:39.311 | 99.00th=[16712], 99.50th=[18220], 99.90th=[21103], 99.95th=[65274], 00:13:39.311 | 99.99th=[70779] 00:13:39.311 bw ( KiB/s): min=34032, max=129104, per=100.00%, avg=100533.60, stdev=37384.86, samples=10 00:13:39.311 iops : min= 8508, max=32276, avg=25133.40, stdev=9346.22, samples=10 00:13:39.311 lat (usec) : 100=0.01%, 250=0.38%, 500=0.88%, 750=1.04%, 1000=1.99% 00:13:39.311 lat (msec) : 2=61.55%, 4=30.05%, 10=0.25%, 20=3.70%, 50=0.10% 00:13:39.311 lat (msec) : 100=0.05% 00:13:39.311 cpu : usr=29.77%, sys=68.21%, ctx=37, majf=0, minf=762 00:13:39.311 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.4%, 32=53.5%, >=64=3.3% 00:13:39.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.311 complete : 0=0.0%, 4=97.9%, 8=0.4%, 16=0.3%, 32=0.1%, 64=1.4%, >=64=0.0% 00:13:39.311 issued rwts: total=0,125730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.311 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.311 00:13:39.311 Run status group 0 (all jobs): 00:13:39.311 WRITE: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=491MiB (515MB), run=5012-5012msec 00:13:39.572 ----------------------------------------------------- 00:13:39.572 Suppressions used: 00:13:39.572 count bytes template 00:13:39.572 1 11 /usr/src/fio/parse.c 00:13:39.572 1 8 libtcmalloc_minimal.so 00:13:39.572 1 904 libcrypto.so 00:13:39.572 ----------------------------------------------------- 00:13:39.572 00:13:39.572 ************************************ 00:13:39.572 END TEST xnvme_fio_plugin 00:13:39.572 
************************************
00:13:39.572 
00:13:39.572 real 0m14.076s
00:13:39.572 user 0m6.259s
00:13:39.572 sys 0m7.290s
00:13:39.572 12:11:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:39.572 12:11:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:39.572 12:11:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:13:39.572 12:11:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:13:39.572 12:11:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:13:39.572 12:11:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:13:39.572 12:11:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:39.572 12:11:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:39.572 12:11:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:39.572 ************************************
00:13:39.572 START TEST xnvme_rpc
00:13:39.572 ************************************
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:13:39.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70545
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70545
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70545 ']'
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:13:39.572 12:11:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:39.833 [2024-11-25 12:11:40.722473] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization...
00:13:39.833 [2024-11-25 12:11:40.722908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70545 ] 00:13:39.833 [2024-11-25 12:11:40.884594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.092 [2024-11-25 12:11:41.029405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.035 xnvme_bdev 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70545 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70545 ']' 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70545 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70545 00:13:41.035 killing process with pid 70545 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70545' 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70545 00:13:41.035 12:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70545 00:13:42.951 ************************************ 00:13:42.951 END TEST xnvme_rpc 00:13:42.951 ************************************ 00:13:42.951 00:13:42.951 real 0m3.118s 00:13:42.951 user 0m3.114s 00:13:42.951 sys 0m0.544s 00:13:42.951 12:11:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.951 12:11:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.952 12:11:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:42.952 12:11:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:42.952 12:11:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.952 12:11:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.952 ************************************ 00:13:42.952 START TEST xnvme_bdevperf 00:13:42.952 ************************************ 00:13:42.952 12:11:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:42.952 12:11:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:42.952 12:11:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:42.952 12:11:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:42.952 12:11:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:42.952 12:11:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
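The xnvme_rpc pass that completed above drives the target through the rpc_cmd helper. A rough equivalent with SPDK's plain RPC client, assuming a spdk_tgt on the default /var/tmp/spdk.sock (the test itself wraps these calls in rpc_cmd and filters the output with jq rather than invoking rpc.py this way):

cd /home/vagrant/spdk_repo/spdk
# Create the bdev; the trailing -c is the conserve_cpu flag the jq checks verify:
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
# Dump the bdev subsystem config, the same data the test inspects with
# jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu':
./scripts/rpc.py framework_get_config bdev
# Tear down again, as xnvme.sh@67 does before killprocess:
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev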
00:13:42.952 12:11:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:42.952 12:11:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:42.952 { 00:13:42.952 "subsystems": [ 00:13:42.952 { 00:13:42.952 "subsystem": "bdev", 00:13:42.952 "config": [ 00:13:42.952 { 00:13:42.952 "params": { 00:13:42.952 "io_mechanism": "io_uring", 00:13:42.952 "conserve_cpu": true, 00:13:42.952 "filename": "/dev/nvme0n1", 00:13:42.952 "name": "xnvme_bdev" 00:13:42.952 }, 00:13:42.952 "method": "bdev_xnvme_create" 00:13:42.952 }, 00:13:42.952 { 00:13:42.952 "method": "bdev_wait_for_examine" 00:13:42.952 } 00:13:42.952 ] 00:13:42.952 } 00:13:42.952 ] 00:13:42.952 } 00:13:42.952 [2024-11-25 12:11:43.878778] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:13:42.952 [2024-11-25 12:11:43.878975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:13:43.211 [2024-11-25 12:11:44.046497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.211 [2024-11-25 12:11:44.192805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.471 Running I/O for 5 seconds... 00:13:45.482 28823.00 IOPS, 112.59 MiB/s [2024-11-25T12:11:47.943Z] 29996.50 IOPS, 117.17 MiB/s [2024-11-25T12:11:48.599Z] 29673.33 IOPS, 115.91 MiB/s [2024-11-25T12:11:49.543Z] 29460.75 IOPS, 115.08 MiB/s 00:13:48.463 Latency(us) 00:13:48.463 [2024-11-25T12:11:49.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.463 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:48.463 xnvme_bdev : 5.00 29781.06 116.33 0.00 0.00 2143.87 371.79 26416.05 00:13:48.463 [2024-11-25T12:11:49.543Z] =================================================================================================================== 00:13:48.463 [2024-11-25T12:11:49.543Z] Total : 29781.06 116.33 0.00 0.00 2143.87 371.79 26416.05 00:13:49.406 12:11:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:49.406 12:11:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:49.406 12:11:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:49.406 12:11:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:49.406 12:11:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:49.406 { 00:13:49.406 "subsystems": [ 00:13:49.406 { 00:13:49.406 "subsystem": "bdev", 00:13:49.406 "config": [ 00:13:49.406 { 00:13:49.406 "params": { 00:13:49.406 "io_mechanism": "io_uring", 00:13:49.406 "conserve_cpu": true, 00:13:49.406 "filename": "/dev/nvme0n1", 00:13:49.406 "name": "xnvme_bdev" 00:13:49.406 }, 00:13:49.406 "method": "bdev_xnvme_create" 00:13:49.406 }, 00:13:49.406 { 00:13:49.406 "method": "bdev_wait_for_examine" 00:13:49.406 } 00:13:49.406 ] 00:13:49.406 } 00:13:49.406 ] 00:13:49.406 } 00:13:49.406 [2024-11-25 12:11:50.418236] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:13:49.406 [2024-11-25 12:11:50.418400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70689 ] 00:13:49.667 [2024-11-25 12:11:50.581928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.667 [2024-11-25 12:11:50.726624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.240 Running I/O for 5 seconds... 00:13:52.122 5373.00 IOPS, 20.99 MiB/s [2024-11-25T12:11:54.149Z] 5272.00 IOPS, 20.59 MiB/s [2024-11-25T12:11:55.092Z] 5353.00 IOPS, 20.91 MiB/s [2024-11-25T12:11:56.036Z] 5467.50 IOPS, 21.36 MiB/s [2024-11-25T12:11:56.298Z] 5587.60 IOPS, 21.83 MiB/s 00:13:55.218 Latency(us) 00:13:55.218 [2024-11-25T12:11:56.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.218 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:55.218 xnvme_bdev : 5.02 5582.81 21.81 0.00 0.00 11439.01 73.26 40128.20 00:13:55.218 [2024-11-25T12:11:56.298Z] =================================================================================================================== 00:13:55.218 [2024-11-25T12:11:56.298Z] Total : 5582.81 21.81 0.00 0.00 11439.01 73.26 40128.20 00:13:55.795 ************************************ 00:13:55.795 END TEST xnvme_bdevperf 00:13:55.795 ************************************ 00:13:55.795 00:13:55.795 real 0m13.061s 00:13:55.795 user 0m9.183s 00:13:55.795 sys 0m2.959s 00:13:55.795 12:11:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.795 12:11:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.056 12:11:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:56.056 12:11:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.056 12:11:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.056 12:11:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.056 ************************************ 00:13:56.056 START TEST xnvme_fio_plugin 00:13:56.056 ************************************ 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:56.056 12:11:56 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:56.056 12:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.056 { 00:13:56.056 "subsystems": [ 00:13:56.056 { 00:13:56.056 "subsystem": "bdev", 00:13:56.056 "config": [ 00:13:56.056 { 00:13:56.056 "params": { 00:13:56.056 "io_mechanism": "io_uring", 00:13:56.056 "conserve_cpu": true, 00:13:56.056 "filename": "/dev/nvme0n1", 00:13:56.056 "name": "xnvme_bdev" 00:13:56.056 }, 00:13:56.056 "method": "bdev_xnvme_create" 00:13:56.056 }, 00:13:56.056 { 00:13:56.056 "method": "bdev_wait_for_examine" 00:13:56.056 } 00:13:56.056 ] 00:13:56.056 } 00:13:56.056 ] 00:13:56.056 } 00:13:56.318 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:56.318 fio-3.35 00:13:56.318 Starting 1 thread 00:14:02.899 00:14:02.899 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70814: Mon Nov 25 12:12:02 2024 00:14:02.899 read: IOPS=35.2k, BW=137MiB/s (144MB/s)(688MiB/5002msec) 00:14:02.899 slat (usec): min=2, max=161, avg= 4.90, stdev= 1.88 00:14:02.899 clat (usec): min=1047, max=4745, avg=1622.67, stdev=313.16 00:14:02.899 lat (usec): min=1052, max=4751, avg=1627.56, stdev=313.27 00:14:02.899 clat percentiles (usec): 00:14:02.899 | 1.00th=[ 1188], 5.00th=[ 1254], 10.00th=[ 1303], 20.00th=[ 1369], 00:14:02.899 | 30.00th=[ 1434], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1614], 00:14:02.900 | 70.00th=[ 1713], 80.00th=[ 1844], 90.00th=[ 2057], 95.00th=[ 2245], 00:14:02.900 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3163], 99.95th=[ 4424], 00:14:02.900 | 99.99th=[ 4621] 00:14:02.900 bw ( KiB/s): min=126211, 
max=154624, per=100.00%, avg=142763.00, stdev=8464.70, samples=9 00:14:02.900 iops : min=31552, max=38656, avg=35690.67, stdev=2116.36, samples=9 00:14:02.900 lat (msec) : 2=87.86%, 4=12.07%, 10=0.07% 00:14:02.900 cpu : usr=40.37%, sys=56.27%, ctx=9, majf=0, minf=762 00:14:02.900 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:02.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.900 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:02.900 issued rwts: total=176064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.900 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.900 00:14:02.900 Run status group 0 (all jobs): 00:14:02.900 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=688MiB (721MB), run=5002-5002msec 00:14:02.900 ----------------------------------------------------- 00:14:02.900 Suppressions used: 00:14:02.900 count bytes template 00:14:02.900 1 11 /usr/src/fio/parse.c 00:14:02.900 1 8 libtcmalloc_minimal.so 00:14:02.900 1 904 libcrypto.so 00:14:02.900 ----------------------------------------------------- 00:14:02.900 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:02.900 12:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.900 { 00:14:02.900 "subsystems": [ 00:14:02.900 { 00:14:02.900 "subsystem": "bdev", 00:14:02.900 "config": [ 00:14:02.900 { 00:14:02.900 "params": { 00:14:02.900 "io_mechanism": "io_uring", 00:14:02.900 "conserve_cpu": true, 00:14:02.900 "filename": "/dev/nvme0n1", 00:14:02.900 "name": "xnvme_bdev" 00:14:02.900 }, 00:14:02.900 "method": "bdev_xnvme_create" 00:14:02.900 }, 00:14:02.900 { 00:14:02.900 "method": "bdev_wait_for_examine" 00:14:02.900 } 00:14:02.900 ] 00:14:02.900 } 00:14:02.900 ] 00:14:02.900 } 00:14:03.159 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:03.159 fio-3.35 00:14:03.159 Starting 1 thread 00:14:09.810 00:14:09.810 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70900: Mon Nov 25 12:12:09 2024 00:14:09.810 write: IOPS=11.9k, BW=46.6MiB/s (48.8MB/s)(234MiB/5018msec); 0 zone resets 00:14:09.810 slat (usec): min=2, max=365, avg= 5.12, stdev= 5.91 00:14:09.810 clat (usec): min=64, max=32367, avg=5310.83, stdev=6294.59 00:14:09.810 lat (usec): min=68, max=32372, avg=5315.95, stdev=6294.67 00:14:09.810 clat percentiles (usec): 00:14:09.810 | 1.00th=[ 163], 5.00th=[ 383], 10.00th=[ 537], 20.00th=[ 816], 00:14:09.810 | 30.00th=[ 938], 40.00th=[ 1074], 50.00th=[ 1254], 60.00th=[ 1729], 00:14:09.810 | 70.00th=[ 8979], 80.00th=[11469], 90.00th=[15270], 95.00th=[18482], 00:14:09.810 | 99.00th=[21627], 99.50th=[22414], 99.90th=[24249], 99.95th=[25822], 00:14:09.810 | 99.99th=[30278] 00:14:09.810 bw ( KiB/s): min=24008, max=64760, per=100.00%, avg=47808.00, stdev=16135.90, samples=10 00:14:09.810 iops : min= 6002, max=16190, avg=11952.00, stdev=4033.97, samples=10 00:14:09.810 lat (usec) : 100=0.11%, 250=2.40%, 500=6.36%, 750=7.61%, 1000=18.85% 00:14:09.810 lat (msec) : 2=26.83%, 4=1.66%, 10=9.64%, 20=23.93%, 50=2.62% 00:14:09.810 cpu : usr=78.43%, sys=11.48%, ctx=71, majf=0, minf=762 00:14:09.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=1.5%, 32=84.6%, >=64=13.8% 00:14:09.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.810 complete : 0=0.0%, 4=94.4%, 8=2.2%, 16=1.9%, 32=1.3%, 64=0.2%, >=64=0.0% 00:14:09.810 issued rwts: total=0,59820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:09.810 00:14:09.810 Run status group 0 (all jobs): 00:14:09.810 WRITE: bw=46.6MiB/s (48.8MB/s), 46.6MiB/s-46.6MiB/s (48.8MB/s-48.8MB/s), io=234MiB (245MB), run=5018-5018msec 00:14:10.071 ----------------------------------------------------- 00:14:10.071 Suppressions used: 00:14:10.071 count bytes template 00:14:10.071 1 11 /usr/src/fio/parse.c 00:14:10.071 1 8 libtcmalloc_minimal.so 00:14:10.071 1 904 libcrypto.so 00:14:10.071 ----------------------------------------------------- 00:14:10.071 00:14:10.071 ************************************ 00:14:10.071 END TEST xnvme_fio_plugin 00:14:10.071 ************************************ 00:14:10.071 
00:14:10.071 real 0m14.052s 00:14:10.071 user 0m8.983s 00:14:10.071 sys 0m4.080s 00:14:10.071 12:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.071 12:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:10.071 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:10.071 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:10.071 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:10.071 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:10.071 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:10.071 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:10.071 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:10.072 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:10.072 12:12:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:10.072 12:12:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.072 12:12:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.072 12:12:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.072 ************************************ 00:14:10.072 START TEST xnvme_rpc 00:14:10.072 ************************************ 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:10.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70992 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70992 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70992 ']' 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.072 12:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:10.331 [2024-11-25 12:12:11.162916] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:14:10.331 [2024-11-25 12:12:11.163439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70992 ] 00:14:10.331 [2024-11-25 12:12:11.329191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.594 [2024-11-25 12:12:11.473602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.167 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.167 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:11.167 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:11.167 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.167 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.167 xnvme_bdev 00:14:11.167 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.167 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70992 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70992 ']' 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70992 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70992 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.428 killing process with pid 70992 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70992' 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70992 00:14:11.428 12:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70992 00:14:13.381 00:14:13.381 real 0m3.097s 00:14:13.381 user 0m3.095s 00:14:13.381 sys 0m0.512s 00:14:13.382 12:12:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.382 ************************************ 00:14:13.382 END TEST xnvme_rpc 00:14:13.382 ************************************ 00:14:13.382 12:12:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.382 12:12:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:13.382 12:12:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:13.382 12:12:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.382 12:12:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.382 ************************************ 00:14:13.382 START TEST xnvme_bdevperf 00:14:13.382 ************************************ 00:14:13.382 12:12:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:13.382 12:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:13.382 12:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:13.382 12:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.382 12:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:13.382 12:12:14 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:13.382 12:12:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.382 12:12:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.382 { 00:14:13.382 "subsystems": [ 00:14:13.382 { 00:14:13.382 "subsystem": "bdev", 00:14:13.382 "config": [ 00:14:13.382 { 00:14:13.382 "params": { 00:14:13.382 "io_mechanism": "io_uring_cmd", 00:14:13.382 "conserve_cpu": false, 00:14:13.382 "filename": "/dev/ng0n1", 00:14:13.382 "name": "xnvme_bdev" 00:14:13.382 }, 00:14:13.382 "method": "bdev_xnvme_create" 00:14:13.382 }, 00:14:13.382 { 00:14:13.382 "method": "bdev_wait_for_examine" 00:14:13.382 } 00:14:13.382 ] 00:14:13.382 } 00:14:13.382 ] 00:14:13.382 } 00:14:13.382 [2024-11-25 12:12:14.314333] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:14:13.382 [2024-11-25 12:12:14.314499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71066 ] 00:14:13.641 [2024-11-25 12:12:14.480560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.641 [2024-11-25 12:12:14.620639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.902 Running I/O for 5 seconds... 00:14:16.232 28430.00 IOPS, 111.05 MiB/s [2024-11-25T12:12:18.257Z] 28554.00 IOPS, 111.54 MiB/s [2024-11-25T12:12:19.200Z] 27966.33 IOPS, 109.24 MiB/s [2024-11-25T12:12:20.139Z] 25432.25 IOPS, 99.34 MiB/s [2024-11-25T12:12:20.139Z] 23995.20 IOPS, 93.73 MiB/s 00:14:19.059 Latency(us) 00:14:19.059 [2024-11-25T12:12:20.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.060 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:19.060 xnvme_bdev : 5.04 23801.28 92.97 0.00 0.00 2663.77 99.25 327478.35 00:14:19.060 [2024-11-25T12:12:20.140Z] =================================================================================================================== 00:14:19.060 [2024-11-25T12:12:20.140Z] Total : 23801.28 92.97 0.00 0.00 2663.77 99.25 327478.35 00:14:20.042 12:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.042 12:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:20.042 12:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:20.042 12:12:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:20.042 12:12:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:20.042 { 00:14:20.042 "subsystems": [ 00:14:20.042 { 00:14:20.042 "subsystem": "bdev", 00:14:20.042 "config": [ 00:14:20.042 { 00:14:20.042 "params": { 00:14:20.042 "io_mechanism": "io_uring_cmd", 00:14:20.042 "conserve_cpu": false, 00:14:20.042 "filename": "/dev/ng0n1", 00:14:20.042 "name": "xnvme_bdev" 00:14:20.042 }, 00:14:20.042 "method": "bdev_xnvme_create" 00:14:20.042 }, 00:14:20.042 { 00:14:20.042 "method": "bdev_wait_for_examine" 00:14:20.042 } 00:14:20.042 ] 00:14:20.042 } 00:14:20.042 ] 00:14:20.042 } 00:14:20.042 [2024-11-25 12:12:20.882612] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
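From this point the suite repeats the same rpc/bdevperf/fio matrix with the io_uring_cmd mechanism, which drives the NVMe character device /dev/ng0n1 with passthrough commands instead of block I/O to /dev/nvme0n1. Relative to the earlier io_uring runs, the only config change is the mechanism and filename; a sketch of the corresponding create call (the '' passed at xnvme.sh@56 above is the empty conserve_cpu argument, i.e. false):

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd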
00:14:20.042 [2024-11-25 12:12:20.882779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71140 ] 00:14:20.042 [2024-11-25 12:12:21.051972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.304 [2024-11-25 12:12:21.197840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.565 Running I/O for 5 seconds... 00:14:22.448 25499.00 IOPS, 99.61 MiB/s [2024-11-25T12:12:24.916Z] 24501.50 IOPS, 95.71 MiB/s [2024-11-25T12:12:25.857Z] 17251.67 IOPS, 67.39 MiB/s [2024-11-25T12:12:26.797Z] 13476.00 IOPS, 52.64 MiB/s [2024-11-25T12:12:26.797Z] 11195.40 IOPS, 43.73 MiB/s 00:14:25.717 Latency(us) 00:14:25.717 [2024-11-25T12:12:26.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.717 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:25.717 xnvme_bdev : 5.03 11135.39 43.50 0.00 0.00 5726.45 80.74 77836.60 00:14:25.717 [2024-11-25T12:12:26.797Z] =================================================================================================================== 00:14:25.717 [2024-11-25T12:12:26.797Z] Total : 11135.39 43.50 0.00 0.00 5726.45 80.74 77836.60 00:14:26.288 12:12:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.288 12:12:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:26.288 12:12:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:26.289 12:12:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:26.289 12:12:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.549 { 00:14:26.549 "subsystems": [ 00:14:26.549 { 00:14:26.549 "subsystem": "bdev", 00:14:26.549 "config": [ 00:14:26.549 { 00:14:26.549 "params": { 00:14:26.549 "io_mechanism": "io_uring_cmd", 00:14:26.549 "conserve_cpu": false, 00:14:26.549 "filename": "/dev/ng0n1", 00:14:26.549 "name": "xnvme_bdev" 00:14:26.549 }, 00:14:26.549 "method": "bdev_xnvme_create" 00:14:26.549 }, 00:14:26.549 { 00:14:26.549 "method": "bdev_wait_for_examine" 00:14:26.549 } 00:14:26.549 ] 00:14:26.549 } 00:14:26.549 ] 00:14:26.549 } 00:14:26.549 [2024-11-25 12:12:27.434835] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:14:26.549 [2024-11-25 12:12:27.435002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71210 ] 00:14:26.549 [2024-11-25 12:12:27.603393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.808 [2024-11-25 12:12:27.745656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.069 Running I/O for 5 seconds... 
00:14:29.403 64640.00 IOPS, 252.50 MiB/s [2024-11-25T12:12:31.057Z] 65312.00 IOPS, 255.12 MiB/s [2024-11-25T12:12:32.486Z] 65386.67 IOPS, 255.42 MiB/s [2024-11-25T12:12:33.056Z] 66704.00 IOPS, 260.56 MiB/s [2024-11-25T12:12:33.056Z] 67968.00 IOPS, 265.50 MiB/s 00:14:31.976 Latency(us) 00:14:31.976 [2024-11-25T12:12:33.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.976 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:31.976 xnvme_bdev : 5.00 67948.96 265.43 0.00 0.00 938.28 507.27 2986.93 00:14:31.976 [2024-11-25T12:12:33.056Z] =================================================================================================================== 00:14:31.976 [2024-11-25T12:12:33.056Z] Total : 67948.96 265.43 0.00 0.00 938.28 507.27 2986.93 00:14:32.921 12:12:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.921 12:12:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:32.921 12:12:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:32.921 12:12:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:32.921 12:12:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.921 { 00:14:32.921 "subsystems": [ 00:14:32.921 { 00:14:32.921 "subsystem": "bdev", 00:14:32.921 "config": [ 00:14:32.921 { 00:14:32.921 "params": { 00:14:32.921 "io_mechanism": "io_uring_cmd", 00:14:32.921 "conserve_cpu": false, 00:14:32.921 "filename": "/dev/ng0n1", 00:14:32.921 "name": "xnvme_bdev" 00:14:32.921 }, 00:14:32.921 "method": "bdev_xnvme_create" 00:14:32.921 }, 00:14:32.921 { 00:14:32.921 "method": "bdev_wait_for_examine" 00:14:32.921 } 00:14:32.921 ] 00:14:32.921 } 00:14:32.921 ] 00:14:32.921 } 00:14:32.921 [2024-11-25 12:12:33.927733] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:14:32.921 [2024-11-25 12:12:33.928282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71290 ] 00:14:33.183 [2024-11-25 12:12:34.094522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.183 [2024-11-25 12:12:34.257793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.757 Running I/O for 5 seconds... 
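Quick sanity check on the unmap result above: the reported bandwidth is just IOPS times the 4096-byte IO size,

    awk 'BEGIN { printf "%.2f MiB/s\n", 67948.96 * 4096 / 1048576 }'   # prints 265.43 MiB/s

which agrees with the 265.43 MiB/s in the table.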
00:14:35.653 18186.00 IOPS, 71.04 MiB/s [2024-11-25T12:12:37.670Z] 24502.00 IOPS, 95.71 MiB/s [2024-11-25T12:12:38.612Z] 23250.33 IOPS, 90.82 MiB/s [2024-11-25T12:12:39.998Z] 22880.75 IOPS, 89.38 MiB/s [2024-11-25T12:12:39.998Z] 18848.60 IOPS, 73.63 MiB/s 00:14:38.918 Latency(us) 00:14:38.918 [2024-11-25T12:12:39.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.918 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:38.918 xnvme_bdev : 5.17 18235.56 71.23 0.00 0.00 3453.12 68.53 609787.27 00:14:38.918 [2024-11-25T12:12:39.998Z] =================================================================================================================== 00:14:38.918 [2024-11-25T12:12:39.998Z] Total : 18235.56 71.23 0.00 0.00 3453.12 68.53 609787.27 00:14:39.860 00:14:39.860 real 0m26.347s 00:14:39.860 user 0m14.498s 00:14:39.860 sys 0m11.284s 00:14:39.860 ************************************ 00:14:39.860 END TEST xnvme_bdevperf 00:14:39.860 ************************************ 00:14:39.860 12:12:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.860 12:12:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:39.860 12:12:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:39.860 12:12:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:39.860 12:12:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.860 12:12:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:39.860 ************************************ 00:14:39.860 START TEST xnvme_fio_plugin 00:14:39.860 ************************************ 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # 
gen_conf 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:39.860 12:12:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.860 { 00:14:39.860 "subsystems": [ 00:14:39.860 { 00:14:39.860 "subsystem": "bdev", 00:14:39.860 "config": [ 00:14:39.860 { 00:14:39.860 "params": { 00:14:39.860 "io_mechanism": "io_uring_cmd", 00:14:39.860 "conserve_cpu": false, 00:14:39.860 "filename": "/dev/ng0n1", 00:14:39.860 "name": "xnvme_bdev" 00:14:39.860 }, 00:14:39.860 "method": "bdev_xnvme_create" 00:14:39.860 }, 00:14:39.860 { 00:14:39.860 "method": "bdev_wait_for_examine" 00:14:39.860 } 00:14:39.860 ] 00:14:39.860 } 00:14:39.860 ] 00:14:39.860 } 00:14:39.860 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:39.860 fio-3.35 00:14:39.860 Starting 1 thread 00:14:46.483 00:14:46.483 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71408: Mon Nov 25 12:12:46 2024 00:14:46.483 read: IOPS=33.9k, BW=132MiB/s (139MB/s)(664MiB/5008msec) 00:14:46.483 slat (usec): min=2, max=133, avg= 3.85, stdev= 2.39 00:14:46.483 clat (usec): min=53, max=37728, avg=1759.65, stdev=1060.82 00:14:46.483 lat (usec): min=56, max=37732, avg=1763.50, stdev=1060.91 00:14:46.483 clat percentiles (usec): 00:14:46.483 | 1.00th=[ 685], 5.00th=[ 955], 10.00th=[ 1074], 20.00th=[ 1237], 00:14:46.483 | 30.00th=[ 1352], 40.00th=[ 1467], 50.00th=[ 1565], 60.00th=[ 1680], 00:14:46.483 | 70.00th=[ 1827], 80.00th=[ 2040], 90.00th=[ 2540], 95.00th=[ 3064], 00:14:46.483 | 99.00th=[ 5211], 99.50th=[ 7308], 99.90th=[13566], 99.95th=[16319], 00:14:46.483 | 99.99th=[29754] 00:14:46.483 bw ( KiB/s): min=118480, max=153524, per=100.00%, avg=135798.80, stdev=10829.42, samples=10 00:14:46.483 iops : min=29620, max=38381, avg=33949.70, stdev=2707.36, samples=10 00:14:46.483 lat (usec) : 100=0.01%, 250=0.05%, 500=0.35%, 750=1.05%, 1000=5.07% 00:14:46.483 lat (msec) : 2=72.21%, 4=19.27%, 10=1.75%, 20=0.20%, 50=0.04% 00:14:46.483 cpu : usr=37.89%, sys=60.85%, ctx=25, majf=0, minf=762 00:14:46.483 IO depths : 1=0.3%, 2=1.1%, 4=2.8%, 8=7.6%, 16=21.8%, 32=63.6%, >=64=2.8% 00:14:46.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.483 
complete : 0=0.0%, 4=97.7%, 8=0.2%, 16=0.2%, 32=0.4%, 64=1.5%, >=64=0.0% 00:14:46.483 issued rwts: total=169864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.483 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:46.483 00:14:46.483 Run status group 0 (all jobs): 00:14:46.483 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=664MiB (696MB), run=5008-5008msec 00:14:46.745 ----------------------------------------------------- 00:14:46.745 Suppressions used: 00:14:46.745 count bytes template 00:14:46.745 1 11 /usr/src/fio/parse.c 00:14:46.745 1 8 libtcmalloc_minimal.so 00:14:46.745 1 904 libcrypto.so 00:14:46.745 ----------------------------------------------------- 00:14:46.745 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:46.745 12:12:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 
--filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:46.745 { 00:14:46.745 "subsystems": [ 00:14:46.745 { 00:14:46.745 "subsystem": "bdev", 00:14:46.745 "config": [ 00:14:46.745 { 00:14:46.745 "params": { 00:14:46.745 "io_mechanism": "io_uring_cmd", 00:14:46.745 "conserve_cpu": false, 00:14:46.745 "filename": "/dev/ng0n1", 00:14:46.745 "name": "xnvme_bdev" 00:14:46.745 }, 00:14:46.745 "method": "bdev_xnvme_create" 00:14:46.745 }, 00:14:46.745 { 00:14:46.745 "method": "bdev_wait_for_examine" 00:14:46.745 } 00:14:46.745 ] 00:14:46.745 } 00:14:46.745 ] 00:14:46.745 } 00:14:47.007 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:47.007 fio-3.35 00:14:47.007 Starting 1 thread 00:14:53.683 00:14:53.683 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71501: Mon Nov 25 12:12:53 2024 00:14:53.683 write: IOPS=12.2k, BW=47.7MiB/s (50.0MB/s)(239MiB/5010msec); 0 zone resets 00:14:53.683 slat (nsec): min=2783, max=73477, avg=3906.07, stdev=2352.33 00:14:53.683 clat (usec): min=50, max=80351, avg=5188.25, stdev=6377.40 00:14:53.683 lat (usec): min=53, max=80365, avg=5192.15, stdev=6377.42 00:14:53.683 clat percentiles (usec): 00:14:53.683 | 1.00th=[ 97], 5.00th=[ 233], 10.00th=[ 404], 20.00th=[ 717], 00:14:53.683 | 30.00th=[ 971], 40.00th=[ 1319], 50.00th=[ 1647], 60.00th=[ 5866], 00:14:53.683 | 70.00th=[ 8291], 80.00th=[10159], 90.00th=[12387], 95.00th=[14615], 00:14:53.683 | 99.00th=[19792], 99.50th=[33817], 99.90th=[71828], 99.95th=[77071], 00:14:53.683 | 99.99th=[79168] 00:14:53.683 bw ( KiB/s): min=38816, max=82760, per=100.00%, avg=48846.40, stdev=13955.82, samples=10 00:14:53.683 iops : min= 9704, max=20690, avg=12211.60, stdev=3488.96, samples=10 00:14:53.683 lat (usec) : 100=1.09%, 250=4.24%, 500=8.71%, 750=7.28%, 1000=9.37% 00:14:53.683 lat (msec) : 2=25.83%, 4=0.99%, 10=21.96%, 20=19.54%, 50=0.75% 00:14:53.683 lat (msec) : 100=0.25% 00:14:53.683 cpu : usr=33.88%, sys=65.34%, ctx=27, majf=0, minf=762 00:14:53.683 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.9%, 32=74.2%, >=64=14.6% 00:14:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.683 complete : 0=0.0%, 4=94.7%, 8=2.5%, 16=1.9%, 32=0.6%, 64=0.3%, >=64=0.0% 00:14:53.683 issued rwts: total=0,61121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:53.683 00:14:53.683 Run status group 0 (all jobs): 00:14:53.683 WRITE: bw=47.7MiB/s (50.0MB/s), 47.7MiB/s-47.7MiB/s (50.0MB/s-50.0MB/s), io=239MiB (250MB), run=5010-5010msec 00:14:53.683 ----------------------------------------------------- 00:14:53.683 Suppressions used: 00:14:53.683 count bytes template 00:14:53.683 1 11 /usr/src/fio/parse.c 00:14:53.683 1 8 libtcmalloc_minimal.so 00:14:53.683 1 904 libcrypto.so 00:14:53.683 ----------------------------------------------------- 00:14:53.683 00:14:53.683 00:14:53.683 real 0m13.788s 00:14:53.683 user 0m6.453s 00:14:53.683 sys 0m6.927s 00:14:53.683 12:12:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.683 ************************************ 00:14:53.683 12:12:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:53.683 END TEST xnvme_fio_plugin 00:14:53.683 ************************************ 00:14:53.683 12:12:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:53.683 12:12:54 
nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:53.683 12:12:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:53.683 12:12:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:53.683 12:12:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.683 12:12:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.683 12:12:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.683 ************************************ 00:14:53.684 START TEST xnvme_rpc 00:14:53.684 ************************************ 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71586 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71586 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71586 ']' 00:14:53.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.684 12:12:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.684 [2024-11-25 12:12:54.563700] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
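The xnvme_rpc test starting here performs a create/inspect/delete round trip against the target. Condensed into a hedged standalone sketch (scripts/rpc.py is assumed as the RPC client; the trace below drives the same calls through rpc_cmd):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/bin/spdk_tgt &                    # target; pid 71586 in this run
    "$SPDK"/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c  # -c => conserve_cpu=true
    "$SPDK"/scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'    # expect: true
    "$SPDK"/scripts/rpc.py bdev_xnvme_delete xnvme_bdev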
00:14:53.684 [2024-11-25 12:12:54.563825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71586 ] 00:14:53.684 [2024-11-25 12:12:54.725465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.945 [2024-11-25 12:12:54.838918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.524 xnvme_bdev 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.524 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71586 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71586 ']' 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71586 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71586 00:14:54.789 killing process with pid 71586 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71586' 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71586 00:14:54.789 12:12:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71586 00:14:56.170 00:14:56.170 real 0m2.681s 00:14:56.170 user 0m2.778s 00:14:56.170 sys 0m0.387s 00:14:56.170 ************************************ 00:14:56.171 END TEST xnvme_rpc 00:14:56.171 ************************************ 00:14:56.171 12:12:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.171 12:12:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.171 12:12:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:56.171 12:12:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:56.171 12:12:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.171 12:12:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.171 ************************************ 00:14:56.171 START TEST xnvme_bdevperf 00:14:56.171 ************************************ 00:14:56.171 12:12:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:56.171 12:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:56.171 12:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:56.171 12:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.171 12:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:56.171 12:12:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:56.171 12:12:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:56.171 12:12:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:56.430 { 00:14:56.430 "subsystems": [ 00:14:56.430 { 00:14:56.430 "subsystem": "bdev", 00:14:56.430 "config": [ 00:14:56.430 { 00:14:56.430 "params": { 00:14:56.430 "io_mechanism": "io_uring_cmd", 00:14:56.430 "conserve_cpu": true, 00:14:56.430 "filename": "/dev/ng0n1", 00:14:56.430 "name": "xnvme_bdev" 00:14:56.430 }, 00:14:56.430 "method": "bdev_xnvme_create" 00:14:56.430 }, 00:14:56.430 { 00:14:56.430 "method": "bdev_wait_for_examine" 00:14:56.430 } 00:14:56.430 ] 00:14:56.430 } 00:14:56.430 ] 00:14:56.430 } 00:14:56.430 [2024-11-25 12:12:57.297440] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:14:56.430 [2024-11-25 12:12:57.297564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71655 ] 00:14:56.430 [2024-11-25 12:12:57.460395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.691 [2024-11-25 12:12:57.591346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.951 Running I/O for 5 seconds... 00:14:58.901 41798.00 IOPS, 163.27 MiB/s [2024-11-25T12:13:00.922Z] 42798.50 IOPS, 167.18 MiB/s [2024-11-25T12:13:01.866Z] 40997.00 IOPS, 160.14 MiB/s [2024-11-25T12:13:03.267Z] 40398.25 IOPS, 157.81 MiB/s [2024-11-25T12:13:03.267Z] 40448.60 IOPS, 158.00 MiB/s 00:15:02.187 Latency(us) 00:15:02.187 [2024-11-25T12:13:03.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.187 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:02.187 xnvme_bdev : 5.01 40390.39 157.77 0.00 0.00 1580.15 226.86 16938.54 00:15:02.187 [2024-11-25T12:13:03.267Z] =================================================================================================================== 00:15:02.187 [2024-11-25T12:13:03.267Z] Total : 40390.39 157.77 0.00 0.00 1580.15 226.86 16938.54 00:15:02.759 12:13:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.759 12:13:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:02.759 12:13:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.759 12:13:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.759 12:13:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.759 { 00:15:02.759 "subsystems": [ 00:15:02.759 { 00:15:02.759 "subsystem": "bdev", 00:15:02.759 "config": [ 00:15:02.759 { 00:15:02.759 "params": { 00:15:02.759 "io_mechanism": "io_uring_cmd", 00:15:02.759 "conserve_cpu": true, 00:15:02.759 "filename": "/dev/ng0n1", 00:15:02.759 "name": "xnvme_bdev" 00:15:02.759 }, 00:15:02.759 "method": "bdev_xnvme_create" 00:15:02.759 }, 00:15:02.759 { 00:15:02.759 "method": "bdev_wait_for_examine" 00:15:02.759 } 00:15:02.759 ] 00:15:02.759 } 00:15:02.759 ] 00:15:02.759 } 00:15:02.759 [2024-11-25 12:13:03.644896] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
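This second bdevperf phase repeats the same four workloads after the loop above flipped the one knob that differs between passes, conserve_cpu. On a config saved to a file, the equivalent edit would be (jq as already used by the harness; bdev.json is a hypothetical filename):

    jq '(.subsystems[] | select(.subsystem == "bdev").config[]
         | select(.method == "bdev_xnvme_create").params.conserve_cpu) = true' bdev.json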
00:15:02.759 [2024-11-25 12:13:03.645211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71729 ] 00:15:02.759 [2024-11-25 12:13:03.805815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.020 [2024-11-25 12:13:03.908473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.281 Running I/O for 5 seconds... 00:15:05.172 40196.00 IOPS, 157.02 MiB/s [2024-11-25T12:13:07.193Z] 39953.50 IOPS, 156.07 MiB/s [2024-11-25T12:13:08.579Z] 38663.33 IOPS, 151.03 MiB/s [2024-11-25T12:13:09.521Z] 38060.00 IOPS, 148.67 MiB/s 00:15:08.441 Latency(us) 00:15:08.441 [2024-11-25T12:13:09.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.441 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:08.441 xnvme_bdev : 5.00 37767.04 147.53 0.00 0.00 1689.23 146.51 120182.94 00:15:08.441 [2024-11-25T12:13:09.521Z] =================================================================================================================== 00:15:08.441 [2024-11-25T12:13:09.521Z] Total : 37767.04 147.53 0.00 0.00 1689.23 146.51 120182.94 00:15:09.016 12:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:09.016 12:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:09.016 12:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:09.016 12:13:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:09.017 12:13:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:09.017 { 00:15:09.017 "subsystems": [ 00:15:09.017 { 00:15:09.017 "subsystem": "bdev", 00:15:09.017 "config": [ 00:15:09.017 { 00:15:09.017 "params": { 00:15:09.017 "io_mechanism": "io_uring_cmd", 00:15:09.017 "conserve_cpu": true, 00:15:09.017 "filename": "/dev/ng0n1", 00:15:09.017 "name": "xnvme_bdev" 00:15:09.017 }, 00:15:09.017 "method": "bdev_xnvme_create" 00:15:09.017 }, 00:15:09.017 { 00:15:09.017 "method": "bdev_wait_for_examine" 00:15:09.017 } 00:15:09.017 ] 00:15:09.017 } 00:15:09.017 ] 00:15:09.017 } 00:15:09.017 [2024-11-25 12:13:09.929943] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:15:09.017 [2024-11-25 12:13:09.930066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71802 ] 00:15:09.017 [2024-11-25 12:13:10.092140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.278 [2024-11-25 12:13:10.196712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.538 Running I/O for 5 seconds... 
00:15:11.418 76032.00 IOPS, 297.00 MiB/s [2024-11-25T12:13:13.502Z] 75616.00 IOPS, 295.38 MiB/s [2024-11-25T12:13:14.886Z] 75754.67 IOPS, 295.92 MiB/s [2024-11-25T12:13:15.456Z] 74976.00 IOPS, 292.88 MiB/s [2024-11-25T12:13:15.456Z] 75443.20 IOPS, 294.70 MiB/s 00:15:14.376 Latency(us) 00:15:14.376 [2024-11-25T12:13:15.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.376 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:14.376 xnvme_bdev : 5.00 75426.06 294.63 0.00 0.00 845.07 401.72 2936.52 00:15:14.376 [2024-11-25T12:13:15.456Z] =================================================================================================================== 00:15:14.376 [2024-11-25T12:13:15.456Z] Total : 75426.06 294.63 0.00 0.00 845.07 401.72 2936.52 00:15:15.319 12:13:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.319 12:13:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.319 12:13:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:15.319 12:13:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.319 12:13:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.319 { 00:15:15.319 "subsystems": [ 00:15:15.319 { 00:15:15.319 "subsystem": "bdev", 00:15:15.320 "config": [ 00:15:15.320 { 00:15:15.320 "params": { 00:15:15.320 "io_mechanism": "io_uring_cmd", 00:15:15.320 "conserve_cpu": true, 00:15:15.320 "filename": "/dev/ng0n1", 00:15:15.320 "name": "xnvme_bdev" 00:15:15.320 }, 00:15:15.320 "method": "bdev_xnvme_create" 00:15:15.320 }, 00:15:15.320 { 00:15:15.320 "method": "bdev_wait_for_examine" 00:15:15.320 } 00:15:15.320 ] 00:15:15.320 } 00:15:15.320 ] 00:15:15.320 } 00:15:15.320 [2024-11-25 12:13:16.279901] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:15:15.320 [2024-11-25 12:13:16.280066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71872 ] 00:15:15.582 [2024-11-25 12:13:16.445864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.582 [2024-11-25 12:13:16.577500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.843 Running I/O for 5 seconds... 
00:15:17.799 30432.00 IOPS, 118.88 MiB/s [2024-11-25T12:13:20.260Z] 32962.50 IOPS, 128.76 MiB/s [2024-11-25T12:13:21.225Z] 32391.00 IOPS, 126.53 MiB/s [2024-11-25T12:13:22.191Z] 26761.00 IOPS, 104.54 MiB/s [2024-11-25T12:13:22.191Z] 24536.00 IOPS, 95.84 MiB/s 00:15:21.111 Latency(us) 00:15:21.111 [2024-11-25T12:13:22.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.111 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:21.111 xnvme_bdev : 5.01 24513.83 95.76 0.00 0.00 2603.44 92.95 464599.83 00:15:21.111 [2024-11-25T12:13:22.191Z] =================================================================================================================== 00:15:21.111 [2024-11-25T12:13:22.191Z] Total : 24513.83 95.76 0.00 0.00 2603.44 92.95 464599.83 00:15:21.684 00:15:21.684 real 0m25.446s 00:15:21.684 user 0m17.867s 00:15:21.684 sys 0m5.671s 00:15:21.684 12:13:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.684 12:13:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.684 ************************************ 00:15:21.684 END TEST xnvme_bdevperf 00:15:21.684 ************************************ 00:15:21.684 12:13:22 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:21.684 12:13:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:21.684 12:13:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.684 12:13:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.684 ************************************ 00:15:21.684 START TEST xnvme_fio_plugin 00:15:21.684 ************************************ 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
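The fio_plugin trace being assembled here runs stock fio against the SPDK bdev layer through an external ioengine. Stripped of the sanitizer bookkeeping (the harness also prepends libasan.so.8 to LD_PRELOAD, as the trace shows), the effective command is approximately:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev

Here bdev.json stands in for the /dev/fd/62 pipe used above, and --filename selects the bdev by name rather than a device path.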
00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:21.684 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.685 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.685 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:21.685 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:21.947 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.948 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.948 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:21.948 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.948 12:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.948 { 00:15:21.948 "subsystems": [ 00:15:21.948 { 00:15:21.948 "subsystem": "bdev", 00:15:21.948 "config": [ 00:15:21.948 { 00:15:21.948 "params": { 00:15:21.948 "io_mechanism": "io_uring_cmd", 00:15:21.948 "conserve_cpu": true, 00:15:21.948 "filename": "/dev/ng0n1", 00:15:21.948 "name": "xnvme_bdev" 00:15:21.948 }, 00:15:21.948 "method": "bdev_xnvme_create" 00:15:21.948 }, 00:15:21.948 { 00:15:21.948 "method": "bdev_wait_for_examine" 00:15:21.948 } 00:15:21.948 ] 00:15:21.948 } 00:15:21.948 ] 00:15:21.948 } 00:15:21.948 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:21.948 fio-3.35 00:15:21.948 Starting 1 thread 00:15:28.544 00:15:28.544 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71992: Mon Nov 25 12:13:28 2024 00:15:28.544 read: IOPS=33.5k, BW=131MiB/s (137MB/s)(655MiB/5001msec) 00:15:28.544 slat (usec): min=2, max=103, avg= 4.43, stdev= 2.88 00:15:28.544 clat (usec): min=821, max=4982, avg=1725.47, stdev=364.02 00:15:28.544 lat (usec): min=824, max=4997, avg=1729.90, stdev=364.81 00:15:28.544 clat percentiles (usec): 00:15:28.544 | 1.00th=[ 1057], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1418], 00:15:28.544 | 30.00th=[ 1516], 40.00th=[ 1598], 50.00th=[ 1680], 60.00th=[ 1778], 00:15:28.544 | 70.00th=[ 1876], 80.00th=[ 2008], 90.00th=[ 2212], 95.00th=[ 2376], 00:15:28.544 | 99.00th=[ 2737], 99.50th=[ 2900], 99.90th=[ 3458], 99.95th=[ 3785], 00:15:28.544 | 99.99th=[ 4883] 00:15:28.544 bw ( KiB/s): min=126464, max=149504, per=99.49%, avg=133431.33, stdev=8786.85, samples=9 00:15:28.544 iops : min=31616, max=37376, avg=33357.78, stdev=2196.73, samples=9 00:15:28.544 lat (usec) : 1000=0.41% 00:15:28.544 lat (msec) : 2=79.23%, 4=20.32%, 10=0.04% 00:15:28.544 cpu : usr=56.36%, sys=40.08%, ctx=13, majf=0, minf=762 00:15:28.544 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:28.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.544 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:28.544 issued rwts: total=167680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.544 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.544 00:15:28.544 Run status group 0 (all jobs): 00:15:28.544 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=655MiB (687MB), run=5001-5001msec 00:15:28.544 ----------------------------------------------------- 00:15:28.544 Suppressions used: 00:15:28.544 count bytes template 00:15:28.544 1 11 /usr/src/fio/parse.c 00:15:28.544 1 8 libtcmalloc_minimal.so 00:15:28.544 1 904 libcrypto.so 00:15:28.544 ----------------------------------------------------- 00:15:28.544 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.544 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:28.805 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:28.805 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:28.805 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:28.805 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:28.805 12:13:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.805 { 00:15:28.805 "subsystems": [ 00:15:28.805 { 00:15:28.805 "subsystem": "bdev", 00:15:28.805 "config": [ 00:15:28.805 { 00:15:28.805 "params": { 00:15:28.805 "io_mechanism": "io_uring_cmd", 00:15:28.805 "conserve_cpu": true, 00:15:28.805 "filename": "/dev/ng0n1", 00:15:28.805 "name": "xnvme_bdev" 00:15:28.805 }, 00:15:28.805 "method": "bdev_xnvme_create" 00:15:28.805 }, 00:15:28.805 { 00:15:28.805 "method": "bdev_wait_for_examine" 00:15:28.805 } 00:15:28.805 ] 00:15:28.805 } 00:15:28.805 ] 00:15:28.805 } 00:15:28.805 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:28.805 fio-3.35 00:15:28.805 Starting 1 thread 00:15:35.446 00:15:35.446 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72083: Mon Nov 25 12:13:35 2024 00:15:35.446 write: IOPS=7241, BW=28.3MiB/s (29.7MB/s)(142MiB/5011msec); 0 zone resets 00:15:35.446 slat (usec): min=2, max=255, avg= 4.10, stdev= 3.29 00:15:35.446 clat (usec): min=55, max=301921, avg=8782.22, stdev=15693.50 00:15:35.446 lat (usec): min=58, max=301927, avg=8786.32, stdev=15693.51 00:15:35.446 clat percentiles (usec): 00:15:35.446 | 1.00th=[ 105], 5.00th=[ 180], 10.00th=[ 355], 20.00th=[ 537], 00:15:35.446 | 30.00th=[ 775], 40.00th=[ 1237], 50.00th=[ 1696], 60.00th=[ 2180], 00:15:35.446 | 70.00th=[ 16909], 80.00th=[ 19268], 90.00th=[ 21365], 95.00th=[ 23200], 00:15:35.446 | 99.00th=[ 30540], 99.50th=[ 62129], 99.90th=[291505], 99.95th=[299893], 00:15:35.446 | 99.99th=[299893] 00:15:35.446 bw ( KiB/s): min=18464, max=62088, per=100.00%, avg=28980.80, stdev=11941.58, samples=10 00:15:35.446 iops : min= 4616, max=15522, avg=7245.20, stdev=2985.39, samples=10 00:15:35.446 lat (usec) : 100=0.74%, 250=6.15%, 500=11.74%, 750=10.03%, 1000=7.42% 00:15:35.446 lat (msec) : 2=21.86%, 4=3.93%, 10=0.01%, 20=22.44%, 50=15.15% 00:15:35.446 lat (msec) : 100=0.35%, 500=0.18% 00:15:35.446 cpu : usr=87.29%, sys=7.82%, ctx=8, majf=0, minf=762 00:15:35.446 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.3%, 32=66.8%, >=64=23.2% 00:15:35.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.446 complete : 0=0.0%, 4=95.2%, 8=3.3%, 16=1.2%, 32=0.1%, 64=0.3%, >=64=0.0% 00:15:35.446 issued rwts: total=0,36289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.446 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:35.446 00:15:35.446 Run status group 0 (all jobs): 00:15:35.446 WRITE: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=142MiB (149MB), run=5011-5011msec 00:15:35.446 ----------------------------------------------------- 00:15:35.446 Suppressions used: 00:15:35.446 count bytes template 00:15:35.446 1 11 /usr/src/fio/parse.c 00:15:35.446 1 8 libtcmalloc_minimal.so 00:15:35.446 1 904 libcrypto.so 00:15:35.446 ----------------------------------------------------- 00:15:35.446 00:15:35.447 ************************************ 00:15:35.447 END TEST xnvme_fio_plugin 00:15:35.447 ************************************ 00:15:35.447 00:15:35.447 real 0m13.601s 00:15:35.447 user 0m9.905s 00:15:35.447 sys 0m2.946s 00:15:35.447 12:13:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.447 12:13:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:35.447 12:13:36 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71586 00:15:35.447 12:13:36 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71586 ']' 00:15:35.447 
12:13:36 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71586 00:15:35.447 Process with pid 71586 is not found 00:15:35.447 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71586) - No such process 00:15:35.447 12:13:36 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71586 is not found' 00:15:35.447 ************************************ 00:15:35.447 END TEST nvme_xnvme 00:15:35.447 ************************************ 00:15:35.447 00:15:35.447 real 3m30.495s 00:15:35.447 user 2m6.482s 00:15:35.447 sys 1m9.011s 00:15:35.447 12:13:36 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.447 12:13:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.447 12:13:36 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:35.447 12:13:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:35.447 12:13:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.447 12:13:36 -- common/autotest_common.sh@10 -- # set +x 00:15:35.447 ************************************ 00:15:35.447 START TEST blockdev_xnvme 00:15:35.447 ************************************ 00:15:35.447 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:35.447 * Looking for test storage... 00:15:35.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.707 12:13:36 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:35.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.707 --rc genhtml_branch_coverage=1 00:15:35.707 --rc genhtml_function_coverage=1 00:15:35.707 --rc genhtml_legend=1 00:15:35.707 --rc geninfo_all_blocks=1 00:15:35.707 --rc geninfo_unexecuted_blocks=1 00:15:35.707 00:15:35.707 ' 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:35.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.707 --rc genhtml_branch_coverage=1 00:15:35.707 --rc genhtml_function_coverage=1 00:15:35.707 --rc genhtml_legend=1 00:15:35.707 --rc geninfo_all_blocks=1 00:15:35.707 --rc geninfo_unexecuted_blocks=1 00:15:35.707 00:15:35.707 ' 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:35.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.707 --rc genhtml_branch_coverage=1 00:15:35.707 --rc genhtml_function_coverage=1 00:15:35.707 --rc genhtml_legend=1 00:15:35.707 --rc geninfo_all_blocks=1 00:15:35.707 --rc geninfo_unexecuted_blocks=1 00:15:35.707 00:15:35.707 ' 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:35.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.707 --rc genhtml_branch_coverage=1 00:15:35.707 --rc genhtml_function_coverage=1 00:15:35.707 --rc genhtml_legend=1 00:15:35.707 --rc geninfo_all_blocks=1 00:15:35.707 --rc geninfo_unexecuted_blocks=1 00:15:35.707 00:15:35.707 ' 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:35.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72212 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72212 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72212 ']' 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.707 12:13:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.707 12:13:36 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:35.707 [2024-11-25 12:13:36.690956] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
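start_spdk_tgt above launches the target and waitforlisten blocks until its RPC socket answers. A rough approximation of that pattern (the real helper lives in autotest_common.sh; the polling RPC shown is an assumption):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # keep polling until the target's /var/tmp/spdk.sock is up
    done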
00:15:35.707 [2024-11-25 12:13:36.691246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72212 ] 00:15:36.003 [2024-11-25 12:13:36.846890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.003 [2024-11-25 12:13:36.950265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.572 12:13:37 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.572 12:13:37 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:36.572 12:13:37 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:36.572 12:13:37 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:36.572 12:13:37 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:36.572 12:13:37 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:36.572 12:13:37 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:37.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:37.715 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:37.715 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:37.715 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:37.715 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:37.715 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:37.715 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:37.715 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:37.715 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:37.715 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.715 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:37.716 nvme0n1 00:15:37.716 nvme0n2 00:15:37.716 nvme0n3 00:15:37.716 nvme1n1 00:15:37.716 nvme2n1 00:15:37.716 nvme3n1 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.716 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:37.716 12:13:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.717 12:13:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.717 12:13:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.717 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:37.717 12:13:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.717 12:13:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.717 12:13:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.717 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:37.717 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:37.717 12:13:38 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.717 12:13:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.717 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq 
-r '.[] | select(.claimed == false)' 00:15:37.717 12:13:38 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.717 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:37.717 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:37.717 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "83118c04-399f-4e00-bcef-93e29f8b2048"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "83118c04-399f-4e00-bcef-93e29f8b2048",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "969f5240-e562-410c-abf0-67573b6fdbce"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "969f5240-e562-410c-abf0-67573b6fdbce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "bdb1ab46-1d71-4916-ac4c-3b4ca93a479c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bdb1ab46-1d71-4916-ac4c-3b4ca93a479c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "67f02596-c2dc-426c-bdf1-6b2ab1558c43"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "67f02596-c2dc-426c-bdf1-6b2ab1558c43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bc9d56f7-6ecc-445b-a356-a5aa5268c183"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bc9d56f7-6ecc-445b-a356-a5aa5268c183",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "18589ab9-44d4-46fe-809c-e842035ff257"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "18589ab9-44d4-46fe-809c-e842035ff257",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:37.978 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:37.978 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:37.978 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:37.978 12:13:38 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 72212 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72212 ']' 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72212 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72212 00:15:37.978 killing process with pid 72212 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72212' 00:15:37.978 12:13:38 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72212 00:15:37.978 12:13:38 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72212 00:15:39.374 12:13:40 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:39.374 12:13:40 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:39.374 12:13:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:39.374 12:13:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.374 12:13:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.374 ************************************ 00:15:39.374 START TEST bdev_hello_world 00:15:39.374 ************************************ 00:15:39.374 12:13:40 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:39.374 [2024-11-25 12:13:40.416319] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:15:39.374 [2024-11-25 12:13:40.416594] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72496 ] 00:15:39.634 [2024-11-25 12:13:40.578226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.634 [2024-11-25 12:13:40.680209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.200 [2024-11-25 12:13:41.046614] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:40.200 [2024-11-25 12:13:41.046668] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:40.200 [2024-11-25 12:13:41.046687] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:40.200 [2024-11-25 12:13:41.048576] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:40.200 [2024-11-25 12:13:41.049719] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:40.200 [2024-11-25 12:13:41.049751] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:40.200 [2024-11-25 12:13:41.050111] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
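Condensed, the xnvme setup traced above does three things per /dev/nvme*n* node: skip anything the kernel reports as zoned, build a bdev_xnvme_create command using the io_uring mechanism, and replay the batch through rpc_cmd. Below is a sketch of the same loop that issues the RPCs one at a time instead of batching them; the inline zoned test condenses the harness's is_block_zoned helper.

  # Per-namespace xnvme bdev setup; the RPC verb and arguments mirror the
  # generated lines in the trace above, the loop structure is simplified.
  io_mechanism=io_uring
  for nvme in /dev/nvme*n*; do
      [[ -b $nvme ]] || continue
      name=${nvme##*/}
      zoned=/sys/block/$name/queue/zoned
      [[ -e $zoned && $(cat "$zoned") != none ]] && continue  # skip zoned namespaces
      rpc_cmd bdev_xnvme_create "$nvme" "$name" "$io_mechanism" -c
  done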
00:15:40.200 00:15:40.200 [2024-11-25 12:13:41.050130] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:40.769 ************************************ 00:15:40.769 END TEST bdev_hello_world 00:15:40.769 ************************************ 00:15:40.769 00:15:40.769 real 0m1.411s 00:15:40.769 user 0m1.095s 00:15:40.770 sys 0m0.167s 00:15:40.770 12:13:41 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.770 12:13:41 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:40.770 12:13:41 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:40.770 12:13:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.770 12:13:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.770 12:13:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.770 ************************************ 00:15:40.770 START TEST bdev_bounds 00:15:40.770 ************************************ 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:40.770 Process bdevio pid: 72527 00:15:40.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72527 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72527' 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72527 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72527 ']' 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.770 12:13:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:41.029 [2024-11-25 12:13:41.904796] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
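The bdevio process starting here was launched in wait mode; the test suites are then triggered over its RPC socket. Stripped of the harness plumbing, the two-step invocation (both commands appear verbatim in the trace) is roughly:

  # bdevio waits (-w) until tests.py fires perform_tests over RPC
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests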
00:15:41.029 [2024-11-25 12:13:41.905195] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72527 ] 00:15:41.029 [2024-11-25 12:13:42.086139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.289 [2024-11-25 12:13:42.194099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.289 [2024-11-25 12:13:42.194480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.289 [2024-11-25 12:13:42.194483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.859 12:13:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.859 12:13:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:41.859 12:13:42 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:41.859 I/O targets: 00:15:41.859 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.859 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.859 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.859 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:41.859 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:41.859 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:41.859 00:15:41.859 00:15:41.859 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.859 http://cunit.sourceforge.net/ 00:15:41.859 00:15:41.859 00:15:41.859 Suite: bdevio tests on: nvme3n1 00:15:41.859 Test: blockdev write read block ...passed 00:15:41.859 Test: blockdev write zeroes read block ...passed 00:15:41.859 Test: blockdev write zeroes read no split ...passed 00:15:41.859 Test: blockdev write zeroes read split ...passed 00:15:41.859 Test: blockdev write zeroes read split partial ...passed 00:15:41.859 Test: blockdev reset ...passed 00:15:41.859 Test: blockdev write read 8 blocks ...passed 00:15:41.859 Test: blockdev write read size > 128k ...passed 00:15:41.859 Test: blockdev write read invalid size ...passed 00:15:41.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:41.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:41.859 Test: blockdev write read max offset ...passed 00:15:41.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:41.859 Test: blockdev writev readv 8 blocks ...passed 00:15:41.859 Test: blockdev writev readv 30 x 1block ...passed 00:15:41.859 Test: blockdev writev readv block ...passed 00:15:41.859 Test: blockdev writev readv size > 128k ...passed 00:15:41.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:41.859 Test: blockdev comparev and writev ...passed 00:15:41.859 Test: blockdev nvme passthru rw ...passed 00:15:41.859 Test: blockdev nvme passthru vendor specific ...passed 00:15:41.859 Test: blockdev nvme admin passthru ...passed 00:15:41.859 Test: blockdev copy ...passed 00:15:41.859 Suite: bdevio tests on: nvme2n1 00:15:41.859 Test: blockdev write read block ...passed 00:15:41.859 Test: blockdev write zeroes read block ...passed 00:15:41.859 Test: blockdev write zeroes read no split ...passed 00:15:42.120 Test: blockdev write zeroes read split ...passed 00:15:42.120 Test: blockdev write zeroes read split partial ...passed 00:15:42.120 Test: blockdev reset ...passed 
00:15:42.120 Test: blockdev write read 8 blocks ...passed 00:15:42.120 Test: blockdev write read size > 128k ...passed 00:15:42.120 Test: blockdev write read invalid size ...passed 00:15:42.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.120 Test: blockdev write read max offset ...passed 00:15:42.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.120 Test: blockdev writev readv 8 blocks ...passed 00:15:42.120 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.120 Test: blockdev writev readv block ...passed 00:15:42.120 Test: blockdev writev readv size > 128k ...passed 00:15:42.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.120 Test: blockdev comparev and writev ...passed 00:15:42.120 Test: blockdev nvme passthru rw ...passed 00:15:42.120 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.120 Test: blockdev nvme admin passthru ...passed 00:15:42.120 Test: blockdev copy ...passed 00:15:42.120 Suite: bdevio tests on: nvme1n1 00:15:42.120 Test: blockdev write read block ...passed 00:15:42.120 Test: blockdev write zeroes read block ...passed 00:15:42.120 Test: blockdev write zeroes read no split ...passed 00:15:42.120 Test: blockdev write zeroes read split ...passed 00:15:42.120 Test: blockdev write zeroes read split partial ...passed 00:15:42.120 Test: blockdev reset ...passed 00:15:42.120 Test: blockdev write read 8 blocks ...passed 00:15:42.120 Test: blockdev write read size > 128k ...passed 00:15:42.120 Test: blockdev write read invalid size ...passed 00:15:42.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.120 Test: blockdev write read max offset ...passed 00:15:42.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.120 Test: blockdev writev readv 8 blocks ...passed 00:15:42.120 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.120 Test: blockdev writev readv block ...passed 00:15:42.120 Test: blockdev writev readv size > 128k ...passed 00:15:42.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.120 Test: blockdev comparev and writev ...passed 00:15:42.120 Test: blockdev nvme passthru rw ...passed 00:15:42.120 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.120 Test: blockdev nvme admin passthru ...passed 00:15:42.120 Test: blockdev copy ...passed 00:15:42.120 Suite: bdevio tests on: nvme0n3 00:15:42.120 Test: blockdev write read block ...passed 00:15:42.120 Test: blockdev write zeroes read block ...passed 00:15:42.120 Test: blockdev write zeroes read no split ...passed 00:15:42.120 Test: blockdev write zeroes read split ...passed 00:15:42.120 Test: blockdev write zeroes read split partial ...passed 00:15:42.120 Test: blockdev reset ...passed 00:15:42.120 Test: blockdev write read 8 blocks ...passed 00:15:42.120 Test: blockdev write read size > 128k ...passed 00:15:42.120 Test: blockdev write read invalid size ...passed 00:15:42.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.120 Test: blockdev write read max offset ...passed 00:15:42.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.120 Test: blockdev writev readv 8 blocks 
...passed 00:15:42.120 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.121 Test: blockdev writev readv block ...passed 00:15:42.121 Test: blockdev writev readv size > 128k ...passed 00:15:42.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.121 Test: blockdev comparev and writev ...passed 00:15:42.121 Test: blockdev nvme passthru rw ...passed 00:15:42.121 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.121 Test: blockdev nvme admin passthru ...passed 00:15:42.121 Test: blockdev copy ...passed 00:15:42.121 Suite: bdevio tests on: nvme0n2 00:15:42.121 Test: blockdev write read block ...passed 00:15:42.121 Test: blockdev write zeroes read block ...passed 00:15:42.121 Test: blockdev write zeroes read no split ...passed 00:15:42.121 Test: blockdev write zeroes read split ...passed 00:15:42.380 Test: blockdev write zeroes read split partial ...passed 00:15:42.380 Test: blockdev reset ...passed 00:15:42.380 Test: blockdev write read 8 blocks ...passed 00:15:42.380 Test: blockdev write read size > 128k ...passed 00:15:42.380 Test: blockdev write read invalid size ...passed 00:15:42.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.380 Test: blockdev write read max offset ...passed 00:15:42.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.380 Test: blockdev writev readv 8 blocks ...passed 00:15:42.380 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.380 Test: blockdev writev readv block ...passed 00:15:42.380 Test: blockdev writev readv size > 128k ...passed 00:15:42.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.380 Test: blockdev comparev and writev ...passed 00:15:42.380 Test: blockdev nvme passthru rw ...passed 00:15:42.380 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.380 Test: blockdev nvme admin passthru ...passed 00:15:42.380 Test: blockdev copy ...passed 00:15:42.380 Suite: bdevio tests on: nvme0n1 00:15:42.380 Test: blockdev write read block ...passed 00:15:42.380 Test: blockdev write zeroes read block ...passed 00:15:42.380 Test: blockdev write zeroes read no split ...passed 00:15:42.640 Test: blockdev write zeroes read split ...passed 00:15:42.640 Test: blockdev write zeroes read split partial ...passed 00:15:42.640 Test: blockdev reset ...passed 00:15:42.640 Test: blockdev write read 8 blocks ...passed 00:15:42.640 Test: blockdev write read size > 128k ...passed 00:15:42.640 Test: blockdev write read invalid size ...passed 00:15:42.640 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.640 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.640 Test: blockdev write read max offset ...passed 00:15:42.640 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.640 Test: blockdev writev readv 8 blocks ...passed 00:15:42.640 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.640 Test: blockdev writev readv block ...passed 00:15:42.640 Test: blockdev writev readv size > 128k ...passed 00:15:42.640 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.640 Test: blockdev comparev and writev ...passed 00:15:42.640 Test: blockdev nvme passthru rw ...passed 00:15:42.640 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.640 Test: blockdev nvme admin passthru ...passed 00:15:42.640 Test: blockdev copy ...passed 
00:15:42.640 00:15:42.640 Run Summary: Type Total Ran Passed Failed Inactive 00:15:42.640 suites 6 6 n/a 0 0 00:15:42.640 tests 138 138 138 0 0 00:15:42.640 asserts 780 780 780 0 n/a 00:15:42.640 00:15:42.640 Elapsed time = 1.867 seconds 00:15:42.640 0 00:15:42.640 12:13:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72527 00:15:42.640 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72527 ']' 00:15:42.640 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72527 00:15:42.640 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:42.640 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.640 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72527 00:15:42.640 killing process with pid 72527 00:15:42.640 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.640 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.641 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72527' 00:15:42.641 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72527 00:15:42.641 12:13:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72527 00:15:43.579 ************************************ 00:15:43.579 END TEST bdev_bounds 00:15:43.579 ************************************ 00:15:43.579 12:13:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:43.579 00:15:43.579 real 0m2.566s 00:15:43.579 user 0m6.105s 00:15:43.579 sys 0m0.318s 00:15:43.579 12:13:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.579 12:13:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:43.579 12:13:44 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:43.579 12:13:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:43.579 12:13:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.579 12:13:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.579 ************************************ 00:15:43.579 START TEST bdev_nbd 00:15:43.579 ************************************ 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:43.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
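The nbd stage below exports each xnvme bdev through the kernel nbd driver and verifies it with a single direct-I/O read. A condensed per-device round trip, using only commands that appear in the trace (the output path is shortened to /tmp for the sketch):

  # Attach one bdev to an nbd node, read a block back, detach
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk nvme0n1                  # kernel assigns a node, e.g. /dev/nbd0
  grep -q -w nbd0 /proc/partitions             # wait for the kernel to expose it
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [[ $(stat -c %s /tmp/nbdtest) == 4096 ]]     # exactly one 4 KiB block copied
  $rpc nbd_stop_disk /dev/nbd0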
00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72587 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72587 /var/tmp/spdk-nbd.sock 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72587 ']' 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:43.579 12:13:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:43.579 [2024-11-25 12:13:44.519929] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:15:43.579 [2024-11-25 12:13:44.520267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.840 [2024-11-25 12:13:44.674098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.840 [2024-11-25 12:13:44.777745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.409 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.670 
1+0 records in 00:15:44.670 1+0 records out 00:15:44.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00168726 s, 2.4 MB/s 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.670 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.931 1+0 records in 00:15:44.931 1+0 records out 00:15:44.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070449 s, 5.8 MB/s 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.931 12:13:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:45.191 12:13:46 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.191 1+0 records in 00:15:45.191 1+0 records out 00:15:45.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00095886 s, 4.3 MB/s 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.191 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.452 1+0 records in 00:15:45.452 1+0 records out 00:15:45.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119519 s, 3.4 MB/s 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.452 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.710 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.710 1+0 records in 00:15:45.710 1+0 records out 00:15:45.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561609 s, 7.3 MB/s 00:15:45.711 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.711 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.711 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.711 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.711 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.711 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.711 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.711 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:45.971 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:45.972 12:13:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.972 1+0 records in 00:15:45.972 1+0 records out 00:15:45.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111712 s, 3.7 MB/s 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.972 12:13:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd0", 00:15:46.232 "bdev_name": "nvme0n1" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd1", 00:15:46.232 "bdev_name": "nvme0n2" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd2", 00:15:46.232 "bdev_name": "nvme0n3" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd3", 00:15:46.232 "bdev_name": "nvme1n1" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd4", 00:15:46.232 "bdev_name": "nvme2n1" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd5", 00:15:46.232 "bdev_name": "nvme3n1" 00:15:46.232 } 00:15:46.232 ]' 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd0", 00:15:46.232 "bdev_name": "nvme0n1" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd1", 00:15:46.232 "bdev_name": "nvme0n2" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd2", 00:15:46.232 "bdev_name": "nvme0n3" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd3", 00:15:46.232 "bdev_name": "nvme1n1" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd4", 00:15:46.232 "bdev_name": "nvme2n1" 00:15:46.232 }, 00:15:46.232 { 00:15:46.232 "nbd_device": "/dev/nbd5", 00:15:46.232 "bdev_name": "nvme3n1" 00:15:46.232 } 00:15:46.232 ]' 00:15:46.232 12:13:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.232 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.493 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.752 12:13:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.014 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.274 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.534 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.795 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:48.056 /dev/nbd0 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- 
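The zero-count assertion just above (nbd_common.sh@61-66) reduces the nbd_get_disks JSON to a device count with jq and grep -c; the bare "true" in the trace exists because grep -c exits nonzero on zero matches. A standalone equivalent, assuming the same rpc.py path and socket used throughout this run:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
count=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | .nbd_device' \
    | grep -c /dev/nbd || true)   # grep still prints 0; '|| true' only rescues the exit status
[ "$count" -eq 0 ] || echo "stale nbd exports remain: $count"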
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.056 1+0 records in 00:15:48.056 1+0 records out 00:15:48.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106244 s, 3.9 MB/s 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.056 12:13:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:48.321 /dev/nbd1 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.321 1+0 records in 00:15:48.321 1+0 records out 00:15:48.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618302 s, 6.6 MB/s 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.321 12:13:49 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.321 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:48.321 /dev/nbd10 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.584 1+0 records in 00:15:48.584 1+0 records out 00:15:48.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084478 s, 4.8 MB/s 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:48.584 /dev/nbd11 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.584 12:13:49 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.584 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.846 1+0 records in 00:15:48.846 1+0 records out 00:15:48.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149789 s, 2.7 MB/s 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:48.846 /dev/nbd12 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.846 1+0 records in 00:15:48.846 1+0 records out 00:15:48.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108062 s, 3.8 MB/s 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.846 12:13:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:49.107 /dev/nbd13 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.107 1+0 records in 00:15:49.107 1+0 records out 00:15:49.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100229 s, 4.1 MB/s 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.107 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:49.369 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:49.369 { 00:15:49.369 "nbd_device": "/dev/nbd0", 00:15:49.369 "bdev_name": "nvme0n1" 00:15:49.369 }, 00:15:49.369 { 00:15:49.369 "nbd_device": "/dev/nbd1", 00:15:49.369 "bdev_name": "nvme0n2" 00:15:49.369 }, 00:15:49.369 { 00:15:49.369 "nbd_device": "/dev/nbd10", 00:15:49.369 "bdev_name": "nvme0n3" 00:15:49.369 }, 00:15:49.369 { 00:15:49.369 "nbd_device": "/dev/nbd11", 00:15:49.370 "bdev_name": "nvme1n1" 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "nbd_device": "/dev/nbd12", 00:15:49.370 "bdev_name": "nvme2n1" 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "nbd_device": "/dev/nbd13", 00:15:49.370 "bdev_name": "nvme3n1" 00:15:49.370 } 00:15:49.370 ]' 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd 
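In this second pass (nbd_common.sh@14-17, above), nbd_start_disk receives an explicit device path as its second argument, pinning each bdev to a chosen /dev/nbdX instead of letting SPDK allocate the next free node as in the first pass. A sketch of that pairing, reusing the waitfornbd helper sketched earlier:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdevs[@]}"; do
    # the second positional argument requests a specific node instead of auto-assignment
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
    waitfornbd "$(basename "${nbds[$i]}")"
done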
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:49.370 { 00:15:49.370 "nbd_device": "/dev/nbd0", 00:15:49.370 "bdev_name": "nvme0n1" 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "nbd_device": "/dev/nbd1", 00:15:49.370 "bdev_name": "nvme0n2" 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "nbd_device": "/dev/nbd10", 00:15:49.370 "bdev_name": "nvme0n3" 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "nbd_device": "/dev/nbd11", 00:15:49.370 "bdev_name": "nvme1n1" 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "nbd_device": "/dev/nbd12", 00:15:49.370 "bdev_name": "nvme2n1" 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "nbd_device": "/dev/nbd13", 00:15:49.370 "bdev_name": "nvme3n1" 00:15:49.370 } 00:15:49.370 ]' 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:49.370 /dev/nbd1 00:15:49.370 /dev/nbd10 00:15:49.370 /dev/nbd11 00:15:49.370 /dev/nbd12 00:15:49.370 /dev/nbd13' 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:49.370 /dev/nbd1 00:15:49.370 /dev/nbd10 00:15:49.370 /dev/nbd11 00:15:49.370 /dev/nbd12 00:15:49.370 /dev/nbd13' 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:49.370 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:49.632 256+0 records in 00:15:49.632 256+0 records out 00:15:49.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720219 s, 146 MB/s 00:15:49.632 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.632 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:49.632 256+0 records in 00:15:49.632 256+0 records out 00:15:49.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216801 s, 4.8 MB/s 00:15:49.632 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.632 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:49.893 256+0 records in 00:15:49.893 256+0 records out 00:15:49.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.208454 s, 
5.0 MB/s 00:15:49.894 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.894 12:13:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:50.153 256+0 records in 00:15:50.153 256+0 records out 00:15:50.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.252241 s, 4.2 MB/s 00:15:50.153 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:50.153 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:50.413 256+0 records in 00:15:50.413 256+0 records out 00:15:50.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.290811 s, 3.6 MB/s 00:15:50.413 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:50.413 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:50.673 256+0 records in 00:15:50.673 256+0 records out 00:15:50.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.26048 s, 4.0 MB/s 00:15:50.673 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:50.673 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:50.932 256+0 records in 00:15:50.932 256+0 records out 00:15:50.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.226188 s, 4.6 MB/s 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:50.932 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:50.933 
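The round-trip check running here (nbd_dd_data_verify, nbd_common.sh@70-85) writes the same random 1 MiB through every export with O_DIRECT, then reads each back with cmp; any byte drifting fails the test. The whole pattern, condensed (the temp path is assumed; the suite keeps it under test/bdev):
pattern=/tmp/nbdrandtest
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of="$pattern" bs=4096 count=256              # 1 MiB random payload
for nbd in "${nbds[@]}"; do
    dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct   # write through, bypassing the page cache
done
for nbd in "${nbds[@]}"; do
    cmp -b -n 1M "$pattern" "$nbd"   # byte-for-byte readback; nonzero exit aborts the test
done
rm "$pattern"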
12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.933 12:13:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:51.192 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.192 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.192 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.193 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.193 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.193 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.193 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.193 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.193 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.193 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.452 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.712 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.972 12:13:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.233 
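Teardown mirrors setup: after each nbd_stop_disk RPC, waitfornbd_exit (nbd_common.sh@35-45) polls until the node drops out of /proc/partitions. The break in the trace firing right after the grep suggests break-on-absence; a sketch under that reading, with assumed pacing:
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # done once the kernel withdraws the partition entry
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1   # retry pacing assumed; not visible in the trace
    done
    return 1   # still present after 20 polls
}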
12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.233 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:52.493 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:52.754 malloc_lvol_verify 00:15:52.754 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:53.013 5c525218-b463-4643-a5ad-161f347f812f 00:15:53.013 12:13:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:53.274 162afeb7-4b8c-4f79-aaad-75ee26385eee 00:15:53.274 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:53.535 /dev/nbd0 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:53.536 mke2fs 1.47.0 (5-Feb-2023) 00:15:53.536 Discarding device blocks: 0/4096 
done 00:15:53.536 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:53.536 00:15:53.536 Allocating group tables: 0/1 done 00:15:53.536 Writing inode tables: 0/1 done 00:15:53.536 Creating journal (1024 blocks): done 00:15:53.536 Writing superblocks and filesystem accounting information: 0/1 done 00:15:53.536 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.536 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72587 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72587 ']' 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72587 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72587 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72587' 00:15:53.796 killing process with pid 72587 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72587 00:15:53.796 12:13:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72587 00:15:54.739 12:13:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:54.739 00:15:54.739 real 0m11.005s 00:15:54.739 user 0m14.795s 00:15:54.739 sys 0m3.749s 00:15:54.739 12:13:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.739 12:13:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:54.739 ************************************ 00:15:54.739 END TEST bdev_nbd 00:15:54.739 ************************************ 
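The lvol leg that just finished (nbd_with_lvol_verify, nbd_common.sh@131-142) stacks a malloc bdev, an lvstore, and a 4 MiB lvol, exports the lvol over NBD, and proves it is writable by building an ext4 filesystem on it. End to end, condensed — note the capacity check here is one-shot, whereas the real wait_for_nbd_set_capacity helper is written to retry:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
# nbd nodes can momentarily report zero capacity; don't mkfs until the size lands
[ -e /sys/block/nbd0/size ] && [ "$(cat /sys/block/nbd0/size)" -gt 0 ] || exit 1
mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0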
00:15:54.739 12:13:55 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:54.739 12:13:55 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:54.739 12:13:55 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:54.739 12:13:55 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:54.739 12:13:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.739 12:13:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.739 12:13:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.739 ************************************ 00:15:54.739 START TEST bdev_fio 00:15:54.739 ************************************ 00:15:54.739 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- 
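fio_test_suite below builds bdev.fio by appending one job stanza per bdev (blockdev.sh@340-342); with the spdk_bdev ioengine, filename= names a bdev resolved through --spdk_json_conf rather than a /dev node, and the ASAN build preloads the fio plugin via LD_PRELOAD. A sketch of the generated shape and the invocation, mirroring the parameters of this run (the global verify section fio_config_gen also writes is omitted):
fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
    {
        echo "[job_$b]"
        echo "filename=$b"   # a bdev name, not a block device path
    } >> "$fio_cfg"
done
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$fio_cfg" \
    --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output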
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:54.739 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:54.740 ************************************ 00:15:54.740 START TEST bdev_fio_rw_verify 00:15:54.740 ************************************ 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:54.740 12:13:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:54.740 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.740 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.740 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.740 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.740 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.740 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.740 fio-3.35 00:15:54.740 Starting 6 threads 00:16:07.031 00:16:07.031 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72992: Mon Nov 25 12:14:06 2024 00:16:07.031 read: IOPS=41.3k, BW=161MiB/s (169MB/s)(1613MiB/10001msec) 00:16:07.031 slat (usec): min=2, max=2735, avg= 4.59, stdev= 5.98 00:16:07.031 clat (usec): min=82, max=393902, avg=421.92, stdev=1884.92 00:16:07.031 lat (usec): min=86, max=393905, avg=426.52, stdev=1884.99 
00:16:07.031 clat percentiles (usec): 00:16:07.031 | 50.000th=[ 355], 99.000th=[ 1762], 99.900th=[ 3425], 00:16:07.031 | 99.990th=[ 5866], 99.999th=[392168] 00:16:07.031 write: IOPS=41.5k, BW=162MiB/s (170MB/s)(1623MiB/10001msec); 0 zone resets 00:16:07.031 slat (usec): min=12, max=3004, avg=21.30, stdev=37.06 00:16:07.031 clat (usec): min=58, max=7691, avg=522.39, stdev=371.72 00:16:07.031 lat (usec): min=83, max=8438, avg=543.70, stdev=377.17 00:16:07.031 clat percentiles (usec): 00:16:07.031 | 50.000th=[ 449], 99.000th=[ 2212], 99.900th=[ 4047], 99.990th=[ 5669], 00:16:07.031 | 99.999th=[ 7242] 00:16:07.031 bw ( KiB/s): min=59886, max=213472, per=100.00%, avg=166205.20, stdev=6591.80, samples=113 00:16:07.031 iops : min=14970, max=53368, avg=41550.68, stdev=1647.98, samples=113 00:16:07.031 lat (usec) : 100=0.06%, 250=18.63%, 500=49.67%, 750=22.31%, 1000=5.21% 00:16:07.031 lat (msec) : 2=3.11%, 4=0.95%, 10=0.07%, 50=0.01%, 250=0.01% 00:16:07.031 lat (msec) : 500=0.01% 00:16:07.031 cpu : usr=51.41%, sys=31.16%, ctx=9268, majf=0, minf=33070 00:16:07.031 IO depths : 1=12.1%, 2=24.5%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.031 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.031 issued rwts: total=412832,415369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.031 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:07.031 00:16:07.031 Run status group 0 (all jobs): 00:16:07.031 READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=1613MiB (1691MB), run=10001-10001msec 00:16:07.031 WRITE: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=1623MiB (1701MB), run=10001-10001msec 00:16:07.031 ----------------------------------------------------- 00:16:07.031 Suppressions used: 00:16:07.031 count bytes template 00:16:07.031 6 48 /usr/src/fio/parse.c 00:16:07.031 2224 213504 /usr/src/fio/iolog.c 00:16:07.031 1 8 libtcmalloc_minimal.so 00:16:07.031 1 904 libcrypto.so 00:16:07.031 ----------------------------------------------------- 00:16:07.031 00:16:07.031 00:16:07.031 real 0m11.917s 00:16:07.031 user 0m32.289s 00:16:07.031 sys 0m19.000s 00:16:07.031 12:14:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.031 12:14:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:07.031 ************************************ 00:16:07.031 END TEST bdev_fio_rw_verify 00:16:07.031 ************************************ 00:16:07.031 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:07.031 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.031 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:07.031 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.031 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:07.032 
12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "83118c04-399f-4e00-bcef-93e29f8b2048"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "83118c04-399f-4e00-bcef-93e29f8b2048",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "969f5240-e562-410c-abf0-67573b6fdbce"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "969f5240-e562-410c-abf0-67573b6fdbce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "bdb1ab46-1d71-4916-ac4c-3b4ca93a479c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bdb1ab46-1d71-4916-ac4c-3b4ca93a479c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "67f02596-c2dc-426c-bdf1-6b2ab1558c43"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "67f02596-c2dc-426c-bdf1-6b2ab1558c43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bc9d56f7-6ecc-445b-a356-a5aa5268c183"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bc9d56f7-6ecc-445b-a356-a5aa5268c183",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "18589ab9-44d4-46fe-809c-e842035ff257"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "18589ab9-44d4-46fe-809c-e842035ff257",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:07.032 /home/vagrant/spdk_repo/spdk 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:07.032 00:16:07.032 real 
0m12.075s 00:16:07.032 user 0m32.359s 00:16:07.032 sys 0m19.082s 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.032 12:14:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:07.032 ************************************ 00:16:07.032 END TEST bdev_fio 00:16:07.032 ************************************ 00:16:07.032 12:14:07 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:07.032 12:14:07 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:07.032 12:14:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:07.032 12:14:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.032 12:14:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:07.032 ************************************ 00:16:07.032 START TEST bdev_verify 00:16:07.032 ************************************ 00:16:07.032 12:14:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:07.032 [2024-11-25 12:14:07.704084] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:16:07.032 [2024-11-25 12:14:07.704195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73167 ] 00:16:07.032 [2024-11-25 12:14:07.864818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.032 [2024-11-25 12:14:07.966109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.032 [2024-11-25 12:14:07.966242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.290 Running I/O for 5 seconds... 
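Two notes before the verify numbers land. First, the trim pass above was a no-op: the jq filter keeps only bdevs with supported_io_types.unmap == true, and every xNVMe bdev in the dump reports "unmap": false, so the generated bdev.fio was simply removed again. Second, bdev_verify is plain bdevperf; a condensed form of the traced command, with the flag reading inferred from the trace and from the per-core rows in the table below:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128    keep 128 I/Os in flight per job
    # -o 4096   4 KiB I/O size
    # -w verify write data, read it back, compare
    # -t 5      run for 5 seconds
    # -m 0x3    cores 0 and 1; -C lets each core drive every bdev, which is
    #           why each device shows up twice below (Core Mask 0x1 and 0x2)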
00:16:09.595 23008.00 IOPS, 89.88 MiB/s [2024-11-25T12:14:11.609Z] 24560.00 IOPS, 95.94 MiB/s [2024-11-25T12:14:12.987Z] 24096.00 IOPS, 94.12 MiB/s [2024-11-25T12:14:13.550Z] 24240.00 IOPS, 94.69 MiB/s [2024-11-25T12:14:13.550Z] 24320.00 IOPS, 95.00 MiB/s 00:16:12.470 Latency(us) 00:16:12.470 [2024-11-25T12:14:13.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.470 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x0 length 0x80000 00:16:12.470 nvme0n1 : 5.04 1728.04 6.75 0.00 0.00 73927.42 13107.20 72997.02 00:16:12.470 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x80000 length 0x80000 00:16:12.470 nvme0n1 : 5.03 1755.74 6.86 0.00 0.00 72757.13 12603.08 72190.42 00:16:12.470 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x0 length 0x80000 00:16:12.470 nvme0n2 : 5.04 1727.53 6.75 0.00 0.00 73789.12 18350.08 63317.86 00:16:12.470 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x80000 length 0x80000 00:16:12.470 nvme0n2 : 5.03 1755.24 6.86 0.00 0.00 72633.39 14821.22 62511.26 00:16:12.470 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x0 length 0x80000 00:16:12.470 nvme0n3 : 5.07 1743.08 6.81 0.00 0.00 72969.32 8771.74 67754.14 00:16:12.470 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x80000 length 0x80000 00:16:12.470 nvme0n3 : 5.06 1770.14 6.91 0.00 0.00 71870.75 10284.11 61704.66 00:16:12.470 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x0 length 0xbd0bd 00:16:12.470 nvme1n1 : 5.07 3187.44 12.45 0.00 0.00 39737.07 4209.43 66544.25 00:16:12.470 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:12.470 nvme1n1 : 5.06 3330.97 13.01 0.00 0.00 38040.47 3150.77 60091.47 00:16:12.470 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x0 length 0x20000 00:16:12.470 nvme2n1 : 5.07 1740.93 6.80 0.00 0.00 72635.44 7914.73 61301.37 00:16:12.470 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x20000 length 0x20000 00:16:12.470 nvme2n1 : 5.07 1794.08 7.01 0.00 0.00 70520.43 6956.90 65737.65 00:16:12.470 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0x0 length 0xa0000 00:16:12.470 nvme3n1 : 5.07 1740.44 6.80 0.00 0.00 72499.38 8570.09 66947.54 00:16:12.470 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:12.470 Verification LBA range: start 0xa0000 length 0xa0000 00:16:12.470 nvme3n1 : 5.07 1768.26 6.91 0.00 0.00 71389.37 7813.91 68560.74 00:16:12.470 [2024-11-25T12:14:13.550Z] =================================================================================================================== 00:16:12.470 [2024-11-25T12:14:13.550Z] Total : 24041.90 93.91 0.00 0.00 63357.53 3150.77 72997.02 00:16:13.401 00:16:13.401 real 0m6.525s 00:16:13.401 user 0m10.423s 00:16:13.401 sys 0m1.683s 00:16:13.401 12:14:14 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.401 ************************************ 00:16:13.401 END TEST bdev_verify 00:16:13.401 ************************************ 00:16:13.401 12:14:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:13.401 12:14:14 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:13.401 12:14:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:13.401 12:14:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.401 12:14:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:13.401 ************************************ 00:16:13.401 START TEST bdev_verify_big_io 00:16:13.401 ************************************ 00:16:13.401 12:14:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:13.401 [2024-11-25 12:14:14.263439] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:16:13.401 [2024-11-25 12:14:14.263557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73266 ] 00:16:13.401 [2024-11-25 12:14:14.424078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:13.659 [2024-11-25 12:14:14.525524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.659 [2024-11-25 12:14:14.525671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.917 Running I/O for 5 seconds... 
00:16:19.996 1216.00 IOPS, 76.00 MiB/s [2024-11-25T12:14:21.076Z] 3221.50 IOPS, 201.34 MiB/s 00:16:19.996 Latency(us) 00:16:19.996 [2024-11-25T12:14:21.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.996 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x0 length 0x8000 00:16:19.996 nvme0n1 : 5.92 105.45 6.59 0.00 0.00 1162733.40 4789.17 2129415.88 00:16:19.996 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x8000 length 0x8000 00:16:19.996 nvme0n1 : 5.91 83.92 5.25 0.00 0.00 1471562.91 250045.05 2400432.44 00:16:19.996 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x0 length 0x8000 00:16:19.996 nvme0n2 : 5.95 129.02 8.06 0.00 0.00 933283.58 104051.00 903388.55 00:16:19.996 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x8000 length 0x8000 00:16:19.996 nvme0n2 : 5.80 132.34 8.27 0.00 0.00 906569.78 80659.69 974369.08 00:16:19.996 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x0 length 0x8000 00:16:19.996 nvme0n3 : 5.95 104.79 6.55 0.00 0.00 1117453.11 34482.02 2477865.75 00:16:19.996 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x8000 length 0x8000 00:16:19.996 nvme0n3 : 5.81 85.42 5.34 0.00 0.00 1366623.18 137121.48 2413337.99 00:16:19.996 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x0 length 0xbd0b 00:16:19.996 nvme1n1 : 5.92 154.01 9.63 0.00 0.00 734709.41 59688.17 767880.27 00:16:19.996 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:19.996 nvme1n1 : 5.81 165.10 10.32 0.00 0.00 691836.61 8670.92 922746.88 00:16:19.996 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x0 length 0x2000 00:16:19.996 nvme2n1 : 5.96 96.65 6.04 0.00 0.00 1138626.38 33877.07 2606921.26 00:16:19.996 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x2000 length 0x2000 00:16:19.996 nvme2n1 : 5.93 137.70 8.61 0.00 0.00 806123.63 4940.41 871124.68 00:16:19.996 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0x0 length 0xa000 00:16:19.996 nvme3n1 : 5.96 112.66 7.04 0.00 0.00 941284.56 2772.68 2374621.34 00:16:19.996 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.996 Verification LBA range: start 0xa000 length 0xa000 00:16:19.996 nvme3n1 : 5.92 126.97 7.94 0.00 0.00 838890.03 4133.81 1264743.98 00:16:19.996 [2024-11-25T12:14:21.076Z] =================================================================================================================== 00:16:19.996 [2024-11-25T12:14:21.076Z] Total : 1434.04 89.63 0.00 0.00 963473.11 2772.68 2606921.26 00:16:20.932 00:16:20.932 real 0m7.647s 00:16:20.932 user 0m14.168s 00:16:20.933 sys 0m0.378s 00:16:20.933 12:14:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.933 12:14:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
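bdev_verify_big_io is the same verify pass with the I/O size raised from 4096 to 65536 bytes, which is why IOPS fall by an order of magnitude while the queue-depth-128 latencies stretch toward a second per I/O. The summary row is internally consistent; a quick sanity check, bandwidth = IOPS * block size:

    # 1434.04 IOPS * 65536 B   = 93,981,245 B/s
    # 93,981,245 B/s / 2^20 B  = 89.63 MiB/s  -- matches the Total row above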
00:16:20.933 ************************************ 00:16:20.933 END TEST bdev_verify_big_io 00:16:20.933 ************************************ 00:16:20.933 12:14:21 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.933 12:14:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:20.933 12:14:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.933 12:14:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.933 ************************************ 00:16:20.933 START TEST bdev_write_zeroes 00:16:20.933 ************************************ 00:16:20.933 12:14:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.933 [2024-11-25 12:14:21.963270] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:16:20.933 [2024-11-25 12:14:21.963386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73373 ] 00:16:21.190 [2024-11-25 12:14:22.124280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.190 [2024-11-25 12:14:22.226892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.756 Running I/O for 1 seconds... 00:16:22.687 77120.00 IOPS, 301.25 MiB/s 00:16:22.687 Latency(us) 00:16:22.687 [2024-11-25T12:14:23.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.688 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.688 nvme0n1 : 1.02 11012.46 43.02 0.00 0.00 11610.43 4940.41 26416.05 00:16:22.688 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.688 nvme0n2 : 1.03 10988.97 42.93 0.00 0.00 11627.59 5016.02 27827.59 00:16:22.688 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.688 nvme0n3 : 1.03 10966.07 42.84 0.00 0.00 11643.84 5016.02 29037.49 00:16:22.688 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.688 nvme1n1 : 1.03 20971.54 81.92 0.00 0.00 6071.49 3755.72 19559.98 00:16:22.688 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.688 nvme2n1 : 1.03 10911.13 42.62 0.00 0.00 11638.35 6604.01 30045.74 00:16:22.688 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.688 nvme3n1 : 1.03 10898.21 42.57 0.00 0.00 11643.86 6654.42 30449.03 00:16:22.688 [2024-11-25T12:14:23.768Z] =================================================================================================================== 00:16:22.688 [2024-11-25T12:14:23.768Z] Total : 75748.39 295.89 0.00 0.00 10086.23 3755.72 30449.03 00:16:23.622 00:16:23.622 real 0m2.451s 00:16:23.622 user 0m1.710s 00:16:23.622 sys 0m0.577s 00:16:23.622 12:14:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.622 ************************************ 00:16:23.622 12:14:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:23.622 END TEST 
bdev_write_zeroes 00:16:23.622 ************************************ 00:16:23.622 12:14:24 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.622 12:14:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:23.622 12:14:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.622 12:14:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.622 ************************************ 00:16:23.622 START TEST bdev_json_nonenclosed 00:16:23.622 ************************************ 00:16:23.622 12:14:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.622 [2024-11-25 12:14:24.453082] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:16:23.622 [2024-11-25 12:14:24.453211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73423 ] 00:16:23.622 [2024-11-25 12:14:24.614752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.881 [2024-11-25 12:14:24.716091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.881 [2024-11-25 12:14:24.716172] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:23.881 [2024-11-25 12:14:24.716188] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:23.881 [2024-11-25 12:14:24.716197] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:23.881 00:16:23.881 real 0m0.505s 00:16:23.881 user 0m0.314s 00:16:23.881 sys 0m0.086s 00:16:23.881 12:14:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.881 12:14:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:23.881 ************************************ 00:16:23.881 END TEST bdev_json_nonenclosed 00:16:23.881 ************************************ 00:16:23.881 12:14:24 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.881 12:14:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:23.881 12:14:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.881 12:14:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.881 ************************************ 00:16:23.881 START TEST bdev_json_nonarray 00:16:23.881 ************************************ 00:16:23.881 12:14:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:24.139 [2024-11-25 12:14:25.010523] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
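bdev_json_nonenclosed above and the bdev_json_nonarray run starting here are negative tests: each hands bdevperf a deliberately malformed --json config and passes only when the app refuses to start, so the *ERROR* lines and the non-zero spdk_app_stop below are the expected outcome, not failures. The two config files are not printed in the log; the shapes below are hypothetical stand-ins consistent with the traced error messages:

    # hypothetical contents -- the real fixtures live under test/bdev/
    echo '[]'                 > nonenclosed.json  # top-level value not enclosed in {}
    echo '{"subsystems": {}}' > nonarray.json     # "subsystems" present but not an array
    # expected: "not enclosed in {}." and "'subsystems' should be an array."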
00:16:24.139 [2024-11-25 12:14:25.010646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73450 ] 00:16:24.139 [2024-11-25 12:14:25.172376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.397 [2024-11-25 12:14:25.272670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.397 [2024-11-25 12:14:25.272765] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:24.397 [2024-11-25 12:14:25.272782] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:24.397 [2024-11-25 12:14:25.272791] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:24.397 00:16:24.397 real 0m0.531s 00:16:24.397 user 0m0.325s 00:16:24.397 sys 0m0.102s 00:16:24.397 12:14:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.397 12:14:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:24.397 ************************************ 00:16:24.397 END TEST bdev_json_nonarray 00:16:24.397 ************************************ 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:24.655 12:14:25 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:24.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.572 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:11.572 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:11.572 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:11.572 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:11.572 00:17:11.572 real 1m30.035s 00:17:11.572 user 1m27.923s 00:17:11.572 sys 1m39.367s 00:17:11.572 12:15:06 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.572 12:15:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.572 ************************************ 00:17:11.572 END TEST blockdev_xnvme 00:17:11.572 ************************************ 00:17:11.572 12:15:06 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:11.572 12:15:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.572 12:15:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.572 12:15:06 -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.572 ************************************ 00:17:11.572 START TEST ublk 00:17:11.572 ************************************ 00:17:11.572 12:15:06 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:11.572 * Looking for test storage... 00:17:11.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:11.572 12:15:06 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:11.572 12:15:06 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:17:11.572 12:15:06 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:11.572 12:15:06 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:11.572 12:15:06 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.572 12:15:06 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.572 12:15:06 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.572 12:15:06 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.572 12:15:06 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.572 12:15:06 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.572 12:15:06 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.572 12:15:06 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.572 12:15:06 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.572 12:15:06 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.572 12:15:06 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.572 12:15:06 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:11.572 12:15:06 ublk -- scripts/common.sh@345 -- # : 1 00:17:11.572 12:15:06 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.572 12:15:06 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.572 12:15:06 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:11.572 12:15:06 ublk -- scripts/common.sh@353 -- # local d=1 00:17:11.572 12:15:06 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.572 12:15:06 ublk -- scripts/common.sh@355 -- # echo 1 00:17:11.572 12:15:06 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.572 12:15:06 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:11.572 12:15:06 ublk -- scripts/common.sh@353 -- # local d=2 00:17:11.572 12:15:06 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.572 12:15:06 ublk -- scripts/common.sh@355 -- # echo 2 00:17:11.572 12:15:06 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.572 12:15:06 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.572 12:15:06 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.572 12:15:06 ublk -- scripts/common.sh@368 -- # return 0 00:17:11.573 12:15:06 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.573 12:15:06 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:11.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.573 --rc genhtml_branch_coverage=1 00:17:11.573 --rc genhtml_function_coverage=1 00:17:11.573 --rc genhtml_legend=1 00:17:11.573 --rc geninfo_all_blocks=1 00:17:11.573 --rc geninfo_unexecuted_blocks=1 00:17:11.573 00:17:11.573 ' 00:17:11.573 12:15:06 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:11.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.573 --rc genhtml_branch_coverage=1 00:17:11.573 --rc genhtml_function_coverage=1 00:17:11.573 --rc genhtml_legend=1 00:17:11.573 --rc geninfo_all_blocks=1 00:17:11.573 --rc geninfo_unexecuted_blocks=1 00:17:11.573 00:17:11.573 ' 00:17:11.573 12:15:06 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:11.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.573 --rc genhtml_branch_coverage=1 00:17:11.573 --rc genhtml_function_coverage=1 00:17:11.573 --rc genhtml_legend=1 00:17:11.573 --rc geninfo_all_blocks=1 00:17:11.573 --rc geninfo_unexecuted_blocks=1 00:17:11.573 00:17:11.573 ' 00:17:11.573 12:15:06 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:11.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.573 --rc genhtml_branch_coverage=1 00:17:11.573 --rc genhtml_function_coverage=1 00:17:11.573 --rc genhtml_legend=1 00:17:11.573 --rc geninfo_all_blocks=1 00:17:11.573 --rc geninfo_unexecuted_blocks=1 00:17:11.573 00:17:11.573 ' 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:11.573 12:15:06 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:11.573 12:15:06 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:11.573 12:15:06 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:11.573 12:15:06 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:11.573 12:15:06 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:11.573 12:15:06 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:11.573 12:15:06 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:11.573 12:15:06 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:11.573 12:15:06 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:11.573 12:15:06 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:11.573 12:15:06 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.573 12:15:06 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.573 12:15:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.573 ************************************ 00:17:11.573 START TEST test_save_ublk_config 00:17:11.573 ************************************ 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73768 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73768 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73768 ']' 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.573 12:15:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:11.573 [2024-11-25 12:15:06.777420] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
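test_save_ublk_config, starting here, is a save/restore round trip: bring up a target with ublk tracing, create a ublk device backed by a malloc bdev, snapshot the live configuration with save_config, kill the target, boot a second one from that snapshot, and check that the block device reappears. A condensed sketch of the first half as traced below (rpc_cmd is the suite's rpc.py wrapper; arguments abbreviated):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk & tgtpid=$!
    rpc_cmd ublk_create_target              # "UBLK target created successfully"
    rpc_cmd bdev_malloc_create ...          # malloc0: 8192 blocks of 4096 B
    rpc_cmd ublk_start_disk malloc0 0 ...   # ADD_DEV / SET_PARAMS / START_DEV
    config=$(rpc_cmd save_config)           # the JSON dump printed below
    kill "$tgtpid"; wait "$tgtpid"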
00:17:11.573 [2024-11-25 12:15:06.777990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73768 ] 00:17:11.573 [2024-11-25 12:15:06.947374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.573 [2024-11-25 12:15:07.051086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:11.573 [2024-11-25 12:15:07.683970] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:11.573 [2024-11-25 12:15:07.684795] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:11.573 malloc0 00:17:11.573 [2024-11-25 12:15:07.748099] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:11.573 [2024-11-25 12:15:07.748185] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:11.573 [2024-11-25 12:15:07.748195] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:11.573 [2024-11-25 12:15:07.748203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.573 [2024-11-25 12:15:07.756108] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.573 [2024-11-25 12:15:07.756133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.573 [2024-11-25 12:15:07.763978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.573 [2024-11-25 12:15:07.764090] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:11.573 [2024-11-25 12:15:07.780976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.573 0 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.573 12:15:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:11.573 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.573 12:15:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:11.573 "subsystems": [ 00:17:11.573 { 00:17:11.573 "subsystem": "fsdev", 00:17:11.573 "config": [ 00:17:11.573 { 00:17:11.573 "method": "fsdev_set_opts", 00:17:11.573 "params": { 00:17:11.573 "fsdev_io_pool_size": 65535, 00:17:11.573 "fsdev_io_cache_size": 256 00:17:11.573 } 00:17:11.573 } 00:17:11.573 ] 00:17:11.573 }, 00:17:11.573 { 00:17:11.573 "subsystem": "keyring", 00:17:11.573 "config": [] 00:17:11.573 }, 00:17:11.573 { 00:17:11.573 "subsystem": "iobuf", 00:17:11.573 "config": [ 00:17:11.573 { 
00:17:11.573 "method": "iobuf_set_options", 00:17:11.573 "params": { 00:17:11.573 "small_pool_count": 8192, 00:17:11.573 "large_pool_count": 1024, 00:17:11.573 "small_bufsize": 8192, 00:17:11.573 "large_bufsize": 135168, 00:17:11.573 "enable_numa": false 00:17:11.573 } 00:17:11.573 } 00:17:11.573 ] 00:17:11.573 }, 00:17:11.573 { 00:17:11.573 "subsystem": "sock", 00:17:11.573 "config": [ 00:17:11.573 { 00:17:11.573 "method": "sock_set_default_impl", 00:17:11.573 "params": { 00:17:11.573 "impl_name": "posix" 00:17:11.573 } 00:17:11.573 }, 00:17:11.573 { 00:17:11.573 "method": "sock_impl_set_options", 00:17:11.573 "params": { 00:17:11.573 "impl_name": "ssl", 00:17:11.573 "recv_buf_size": 4096, 00:17:11.573 "send_buf_size": 4096, 00:17:11.573 "enable_recv_pipe": true, 00:17:11.573 "enable_quickack": false, 00:17:11.573 "enable_placement_id": 0, 00:17:11.573 "enable_zerocopy_send_server": true, 00:17:11.573 "enable_zerocopy_send_client": false, 00:17:11.573 "zerocopy_threshold": 0, 00:17:11.573 "tls_version": 0, 00:17:11.573 "enable_ktls": false 00:17:11.573 } 00:17:11.573 }, 00:17:11.573 { 00:17:11.573 "method": "sock_impl_set_options", 00:17:11.573 "params": { 00:17:11.573 "impl_name": "posix", 00:17:11.573 "recv_buf_size": 2097152, 00:17:11.573 "send_buf_size": 2097152, 00:17:11.573 "enable_recv_pipe": true, 00:17:11.573 "enable_quickack": false, 00:17:11.573 "enable_placement_id": 0, 00:17:11.573 "enable_zerocopy_send_server": true, 00:17:11.573 "enable_zerocopy_send_client": false, 00:17:11.573 "zerocopy_threshold": 0, 00:17:11.573 "tls_version": 0, 00:17:11.573 "enable_ktls": false 00:17:11.573 } 00:17:11.573 } 00:17:11.573 ] 00:17:11.573 }, 00:17:11.573 { 00:17:11.573 "subsystem": "vmd", 00:17:11.573 "config": [] 00:17:11.573 }, 00:17:11.573 { 00:17:11.573 "subsystem": "accel", 00:17:11.573 "config": [ 00:17:11.573 { 00:17:11.573 "method": "accel_set_options", 00:17:11.573 "params": { 00:17:11.573 "small_cache_size": 128, 00:17:11.573 "large_cache_size": 16, 00:17:11.573 "task_count": 2048, 00:17:11.574 "sequence_count": 2048, 00:17:11.574 "buf_count": 2048 00:17:11.574 } 00:17:11.574 } 00:17:11.574 ] 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "bdev", 00:17:11.574 "config": [ 00:17:11.574 { 00:17:11.574 "method": "bdev_set_options", 00:17:11.574 "params": { 00:17:11.574 "bdev_io_pool_size": 65535, 00:17:11.574 "bdev_io_cache_size": 256, 00:17:11.574 "bdev_auto_examine": true, 00:17:11.574 "iobuf_small_cache_size": 128, 00:17:11.574 "iobuf_large_cache_size": 16 00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "bdev_raid_set_options", 00:17:11.574 "params": { 00:17:11.574 "process_window_size_kb": 1024, 00:17:11.574 "process_max_bandwidth_mb_sec": 0 00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "bdev_iscsi_set_options", 00:17:11.574 "params": { 00:17:11.574 "timeout_sec": 30 00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "bdev_nvme_set_options", 00:17:11.574 "params": { 00:17:11.574 "action_on_timeout": "none", 00:17:11.574 "timeout_us": 0, 00:17:11.574 "timeout_admin_us": 0, 00:17:11.574 "keep_alive_timeout_ms": 10000, 00:17:11.574 "arbitration_burst": 0, 00:17:11.574 "low_priority_weight": 0, 00:17:11.574 "medium_priority_weight": 0, 00:17:11.574 "high_priority_weight": 0, 00:17:11.574 "nvme_adminq_poll_period_us": 10000, 00:17:11.574 "nvme_ioq_poll_period_us": 0, 00:17:11.574 "io_queue_requests": 0, 00:17:11.574 "delay_cmd_submit": true, 00:17:11.574 "transport_retry_count": 4, 00:17:11.574 
"bdev_retry_count": 3, 00:17:11.574 "transport_ack_timeout": 0, 00:17:11.574 "ctrlr_loss_timeout_sec": 0, 00:17:11.574 "reconnect_delay_sec": 0, 00:17:11.574 "fast_io_fail_timeout_sec": 0, 00:17:11.574 "disable_auto_failback": false, 00:17:11.574 "generate_uuids": false, 00:17:11.574 "transport_tos": 0, 00:17:11.574 "nvme_error_stat": false, 00:17:11.574 "rdma_srq_size": 0, 00:17:11.574 "io_path_stat": false, 00:17:11.574 "allow_accel_sequence": false, 00:17:11.574 "rdma_max_cq_size": 0, 00:17:11.574 "rdma_cm_event_timeout_ms": 0, 00:17:11.574 "dhchap_digests": [ 00:17:11.574 "sha256", 00:17:11.574 "sha384", 00:17:11.574 "sha512" 00:17:11.574 ], 00:17:11.574 "dhchap_dhgroups": [ 00:17:11.574 "null", 00:17:11.574 "ffdhe2048", 00:17:11.574 "ffdhe3072", 00:17:11.574 "ffdhe4096", 00:17:11.574 "ffdhe6144", 00:17:11.574 "ffdhe8192" 00:17:11.574 ] 00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "bdev_nvme_set_hotplug", 00:17:11.574 "params": { 00:17:11.574 "period_us": 100000, 00:17:11.574 "enable": false 00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "bdev_malloc_create", 00:17:11.574 "params": { 00:17:11.574 "name": "malloc0", 00:17:11.574 "num_blocks": 8192, 00:17:11.574 "block_size": 4096, 00:17:11.574 "physical_block_size": 4096, 00:17:11.574 "uuid": "4da2aa59-eb4d-4c78-bb39-0198914aa6da", 00:17:11.574 "optimal_io_boundary": 0, 00:17:11.574 "md_size": 0, 00:17:11.574 "dif_type": 0, 00:17:11.574 "dif_is_head_of_md": false, 00:17:11.574 "dif_pi_format": 0 00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "bdev_wait_for_examine" 00:17:11.574 } 00:17:11.574 ] 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "scsi", 00:17:11.574 "config": null 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "scheduler", 00:17:11.574 "config": [ 00:17:11.574 { 00:17:11.574 "method": "framework_set_scheduler", 00:17:11.574 "params": { 00:17:11.574 "name": "static" 00:17:11.574 } 00:17:11.574 } 00:17:11.574 ] 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "vhost_scsi", 00:17:11.574 "config": [] 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "vhost_blk", 00:17:11.574 "config": [] 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "ublk", 00:17:11.574 "config": [ 00:17:11.574 { 00:17:11.574 "method": "ublk_create_target", 00:17:11.574 "params": { 00:17:11.574 "cpumask": "1" 00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "ublk_start_disk", 00:17:11.574 "params": { 00:17:11.574 "bdev_name": "malloc0", 00:17:11.574 "ublk_id": 0, 00:17:11.574 "num_queues": 1, 00:17:11.574 "queue_depth": 128 00:17:11.574 } 00:17:11.574 } 00:17:11.574 ] 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "nbd", 00:17:11.574 "config": [] 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "nvmf", 00:17:11.574 "config": [ 00:17:11.574 { 00:17:11.574 "method": "nvmf_set_config", 00:17:11.574 "params": { 00:17:11.574 "discovery_filter": "match_any", 00:17:11.574 "admin_cmd_passthru": { 00:17:11.574 "identify_ctrlr": false 00:17:11.574 }, 00:17:11.574 "dhchap_digests": [ 00:17:11.574 "sha256", 00:17:11.574 "sha384", 00:17:11.574 "sha512" 00:17:11.574 ], 00:17:11.574 "dhchap_dhgroups": [ 00:17:11.574 "null", 00:17:11.574 "ffdhe2048", 00:17:11.574 "ffdhe3072", 00:17:11.574 "ffdhe4096", 00:17:11.574 "ffdhe6144", 00:17:11.574 "ffdhe8192" 00:17:11.574 ] 00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "nvmf_set_max_subsystems", 00:17:11.574 "params": { 00:17:11.574 "max_subsystems": 1024 
00:17:11.574 } 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "method": "nvmf_set_crdt", 00:17:11.574 "params": { 00:17:11.574 "crdt1": 0, 00:17:11.574 "crdt2": 0, 00:17:11.574 "crdt3": 0 00:17:11.574 } 00:17:11.574 } 00:17:11.574 ] 00:17:11.574 }, 00:17:11.574 { 00:17:11.574 "subsystem": "iscsi", 00:17:11.574 "config": [ 00:17:11.574 { 00:17:11.574 "method": "iscsi_set_options", 00:17:11.574 "params": { 00:17:11.574 "node_base": "iqn.2016-06.io.spdk", 00:17:11.574 "max_sessions": 128, 00:17:11.574 "max_connections_per_session": 2, 00:17:11.574 "max_queue_depth": 64, 00:17:11.574 "default_time2wait": 2, 00:17:11.574 "default_time2retain": 20, 00:17:11.574 "first_burst_length": 8192, 00:17:11.574 "immediate_data": true, 00:17:11.574 "allow_duplicated_isid": false, 00:17:11.574 "error_recovery_level": 0, 00:17:11.574 "nop_timeout": 60, 00:17:11.574 "nop_in_interval": 30, 00:17:11.574 "disable_chap": false, 00:17:11.574 "require_chap": false, 00:17:11.574 "mutual_chap": false, 00:17:11.574 "chap_group": 0, 00:17:11.574 "max_large_datain_per_connection": 64, 00:17:11.574 "max_r2t_per_connection": 4, 00:17:11.574 "pdu_pool_size": 36864, 00:17:11.574 "immediate_data_pool_size": 16384, 00:17:11.574 "data_out_pool_size": 2048 00:17:11.574 } 00:17:11.574 } 00:17:11.574 ] 00:17:11.574 } 00:17:11.574 ] 00:17:11.574 }' 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73768 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73768 ']' 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73768 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73768 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.574 killing process with pid 73768 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73768' 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73768 00:17:11.574 12:15:08 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73768 00:17:11.574 [2024-11-25 12:15:09.150287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:11.574 [2024-11-25 12:15:09.193984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:11.574 [2024-11-25 12:15:09.194165] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:11.574 [2024-11-25 12:15:09.197232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:11.574 [2024-11-25 12:15:09.197289] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:11.574 [2024-11-25 12:15:09.197302] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:11.574 [2024-11-25 12:15:09.197327] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:11.574 [2024-11-25 12:15:09.197470] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73823 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73823 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73823 ']' 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.574 12:15:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:11.575 12:15:10 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:11.575 "subsystems": [ 00:17:11.575 { 00:17:11.575 "subsystem": "fsdev", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "fsdev_set_opts", 00:17:11.575 "params": { 00:17:11.575 "fsdev_io_pool_size": 65535, 00:17:11.575 "fsdev_io_cache_size": 256 00:17:11.575 } 00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "keyring", 00:17:11.575 "config": [] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "iobuf", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "iobuf_set_options", 00:17:11.575 "params": { 00:17:11.575 "small_pool_count": 8192, 00:17:11.575 "large_pool_count": 1024, 00:17:11.575 "small_bufsize": 8192, 00:17:11.575 "large_bufsize": 135168, 00:17:11.575 "enable_numa": false 00:17:11.575 } 00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "sock", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "sock_set_default_impl", 00:17:11.575 "params": { 00:17:11.575 "impl_name": "posix" 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "sock_impl_set_options", 00:17:11.575 "params": { 00:17:11.575 "impl_name": "ssl", 00:17:11.575 "recv_buf_size": 4096, 00:17:11.575 "send_buf_size": 4096, 00:17:11.575 "enable_recv_pipe": true, 00:17:11.575 "enable_quickack": false, 00:17:11.575 "enable_placement_id": 0, 00:17:11.575 "enable_zerocopy_send_server": true, 00:17:11.575 "enable_zerocopy_send_client": false, 00:17:11.575 "zerocopy_threshold": 0, 00:17:11.575 "tls_version": 0, 00:17:11.575 "enable_ktls": false 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "sock_impl_set_options", 00:17:11.575 "params": { 00:17:11.575 "impl_name": "posix", 00:17:11.575 "recv_buf_size": 2097152, 00:17:11.575 "send_buf_size": 2097152, 00:17:11.575 "enable_recv_pipe": true, 00:17:11.575 "enable_quickack": false, 00:17:11.575 "enable_placement_id": 0, 00:17:11.575 "enable_zerocopy_send_server": true, 00:17:11.575 "enable_zerocopy_send_client": false, 00:17:11.575 "zerocopy_threshold": 0, 00:17:11.575 "tls_version": 0, 00:17:11.575 "enable_ktls": false 00:17:11.575 } 00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "vmd", 00:17:11.575 "config": [] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "accel", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "accel_set_options", 00:17:11.575 "params": { 00:17:11.575 "small_cache_size": 128, 
00:17:11.575 "large_cache_size": 16, 00:17:11.575 "task_count": 2048, 00:17:11.575 "sequence_count": 2048, 00:17:11.575 "buf_count": 2048 00:17:11.575 } 00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "bdev", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "bdev_set_options", 00:17:11.575 "params": { 00:17:11.575 "bdev_io_pool_size": 65535, 00:17:11.575 "bdev_io_cache_size": 256, 00:17:11.575 "bdev_auto_examine": true, 00:17:11.575 "iobuf_small_cache_size": 128, 00:17:11.575 "iobuf_large_cache_size": 16 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "bdev_raid_set_options", 00:17:11.575 "params": { 00:17:11.575 "process_window_size_kb": 1024, 00:17:11.575 "process_max_bandwidth_mb_sec": 0 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "bdev_iscsi_set_options", 00:17:11.575 "params": { 00:17:11.575 "timeout_sec": 30 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "bdev_nvme_set_options", 00:17:11.575 "params": { 00:17:11.575 "action_on_timeout": "none", 00:17:11.575 "timeout_us": 0, 00:17:11.575 "timeout_admin_us": 0, 00:17:11.575 "keep_alive_timeout_ms": 10000, 00:17:11.575 "arbitration_burst": 0, 00:17:11.575 "low_priority_weight": 0, 00:17:11.575 "medium_priority_weight": 0, 00:17:11.575 "high_priority_weight": 0, 00:17:11.575 "nvme_adminq_poll_period_us": 10000, 00:17:11.575 "nvme_ioq_poll_period_us": 0, 00:17:11.575 "io_queue_requests": 0, 00:17:11.575 "delay_cmd_submit": true, 00:17:11.575 "transport_retry_count": 4, 00:17:11.575 "bdev_retry_count": 3, 00:17:11.575 "transport_ack_timeout": 0, 00:17:11.575 "ctrlr_loss_timeout_sec": 0, 00:17:11.575 "reconnect_delay_sec": 0, 00:17:11.575 "fast_io_fail_timeout_sec": 0, 00:17:11.575 "disable_auto_failback": false, 00:17:11.575 "generate_uuids": false, 00:17:11.575 "transport_tos": 0, 00:17:11.575 "nvme_error_stat": false, 00:17:11.575 "rdma_srq_size": 0, 00:17:11.575 "io_path_stat": false, 00:17:11.575 "allow_accel_sequence": false, 00:17:11.575 "rdma_max_cq_size": 0, 00:17:11.575 "rdma_cm_event_timeout_ms": 0, 00:17:11.575 "dhchap_digests": [ 00:17:11.575 "sha256", 00:17:11.575 "sha384", 00:17:11.575 "sha512" 00:17:11.575 ], 00:17:11.575 "dhchap_dhgroups": [ 00:17:11.575 "null", 00:17:11.575 "ffdhe2048", 00:17:11.575 "ffdhe3072", 00:17:11.575 "ffdhe4096", 00:17:11.575 "ffdhe6144", 00:17:11.575 "ffdhe8192" 00:17:11.575 ] 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "bdev_nvme_set_hotplug", 00:17:11.575 "params": { 00:17:11.575 "period_us": 100000, 00:17:11.575 "enable": false 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "bdev_malloc_create", 00:17:11.575 "params": { 00:17:11.575 "name": "malloc0", 00:17:11.575 "num_blocks": 8192, 00:17:11.575 "block_size": 4096, 00:17:11.575 "physical_block_size": 4096, 00:17:11.575 "uuid": "4da2aa59-eb4d-4c78-bb39-0198914aa6da", 00:17:11.575 "optimal_io_boundary": 0, 00:17:11.575 "md_size": 0, 00:17:11.575 "dif_type": 0, 00:17:11.575 "dif_is_head_of_md": false, 00:17:11.575 "dif_pi_format": 0 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "bdev_wait_for_examine" 00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "scsi", 00:17:11.575 "config": null 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "scheduler", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "framework_set_scheduler", 00:17:11.575 "params": { 00:17:11.575 "name": "static" 00:17:11.575 } 
00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "vhost_scsi", 00:17:11.575 "config": [] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "vhost_blk", 00:17:11.575 "config": [] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "ublk", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "ublk_create_target", 00:17:11.575 "params": { 00:17:11.575 "cpumask": "1" 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "ublk_start_disk", 00:17:11.575 "params": { 00:17:11.575 "bdev_name": "malloc0", 00:17:11.575 "ublk_id": 0, 00:17:11.575 "num_queues": 1, 00:17:11.575 "queue_depth": 128 00:17:11.575 } 00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "nbd", 00:17:11.575 "config": [] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "nvmf", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "nvmf_set_config", 00:17:11.575 "params": { 00:17:11.575 "discovery_filter": "match_any", 00:17:11.575 "admin_cmd_passthru": { 00:17:11.575 "identify_ctrlr": false 00:17:11.575 }, 00:17:11.575 "dhchap_digests": [ 00:17:11.575 "sha256", 00:17:11.575 "sha384", 00:17:11.575 "sha512" 00:17:11.575 ], 00:17:11.575 "dhchap_dhgroups": [ 00:17:11.575 "null", 00:17:11.575 "ffdhe2048", 00:17:11.575 "ffdhe3072", 00:17:11.575 "ffdhe4096", 00:17:11.575 "ffdhe6144", 00:17:11.575 "ffdhe8192" 00:17:11.575 ] 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "nvmf_set_max_subsystems", 00:17:11.575 "params": { 00:17:11.575 "max_subsystems": 1024 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "method": "nvmf_set_crdt", 00:17:11.575 "params": { 00:17:11.575 "crdt1": 0, 00:17:11.575 "crdt2": 0, 00:17:11.575 "crdt3": 0 00:17:11.575 } 00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "subsystem": "iscsi", 00:17:11.575 "config": [ 00:17:11.575 { 00:17:11.575 "method": "iscsi_set_options", 00:17:11.575 "params": { 00:17:11.575 "node_base": "iqn.2016-06.io.spdk", 00:17:11.575 "max_sessions": 128, 00:17:11.575 "max_connections_per_session": 2, 00:17:11.575 "max_queue_depth": 64, 00:17:11.575 "default_time2wait": 2, 00:17:11.575 "default_time2retain": 20, 00:17:11.575 "first_burst_length": 8192, 00:17:11.575 "immediate_data": true, 00:17:11.575 "allow_duplicated_isid": false, 00:17:11.575 "error_recovery_level": 0, 00:17:11.575 "nop_timeout": 60, 00:17:11.575 "nop_in_interval": 30, 00:17:11.575 "disable_chap": false, 00:17:11.575 "require_chap": false, 00:17:11.576 "mutual_chap": false, 00:17:11.576 "chap_group": 0, 00:17:11.576 "max_large_datain_per_connection": 64, 00:17:11.576 "max_r2t_per_connection": 4, 00:17:11.576 "pdu_pool_size": 36864, 00:17:11.576 "immediate_data_pool_size": 16384, 00:17:11.576 "data_out_pool_size": 2048 00:17:11.576 } 00:17:11.576 } 00:17:11.576 ] 00:17:11.576 } 00:17:11.576 ] 00:17:11.576 }' 00:17:11.576 [2024-11-25 12:15:10.763941] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
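The JSON document echoed above is the configuration that test_save_ublk_config captured from the first target earlier in the run (the capture itself is outside this excerpt); ublk/ublk.sh@118 pipes it straight back into a fresh spdk_tgt through process substitution (-c /dev/fd/63). A minimal sketch of the same replay technique, trimmed to just the bdev and ublk subsystems — the full dump above also carries fsdev, iobuf, sock, nvmf, iscsi and the rest; the heredoc framing is illustrative, not part of the captured run:

    # Sketch: replay a saved ublk configuration into a fresh target, as
    # ublk/ublk.sh@118 does above. Method names and parameters are taken
    # verbatim from the dump; everything else is trimmed for brevity.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } },
            { "method": "bdev_wait_for_examine" }
          ]
        },
        {
          "subsystem": "ublk",
          "config": [
            { "method": "ublk_create_target", "params": { "cpumask": "1" } },
            { "method": "ublk_start_disk",
              "params": { "bdev_name": "malloc0", "ublk_id": 0,
                          "num_queues": 1, "queue_depth": 128 } }
          ]
        }
      ]
    }
    EOF
    )

Because the ublk section carries ublk_create_target and ublk_start_disk, the target comes back up with /dev/ublkb0 already exported, which is exactly what the ublk.sh@122-123 assertions check further down.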
00:17:11.576 [2024-11-25 12:15:10.764094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73823 ] 00:17:11.576 [2024-11-25 12:15:10.922410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.576 [2024-11-25 12:15:11.024621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.576 [2024-11-25 12:15:11.784965] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:11.576 [2024-11-25 12:15:11.785803] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:11.576 [2024-11-25 12:15:11.793101] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:11.576 [2024-11-25 12:15:11.793180] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:11.576 [2024-11-25 12:15:11.793190] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:11.576 [2024-11-25 12:15:11.793197] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.576 [2024-11-25 12:15:11.802028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.576 [2024-11-25 12:15:11.802055] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.576 [2024-11-25 12:15:11.808974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.576 [2024-11-25 12:15:11.809083] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:11.576 [2024-11-25 12:15:11.825970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73823 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73823 ']' 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73823 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73823 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73823' 00:17:11.576 killing process with pid 73823 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73823 00:17:11.576 12:15:11 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73823 00:17:12.321 [2024-11-25 12:15:13.073111] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:12.321 [2024-11-25 12:15:13.112999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:12.321 [2024-11-25 12:15:13.113137] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:12.321 [2024-11-25 12:15:13.120981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:12.321 [2024-11-25 12:15:13.121043] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:12.321 [2024-11-25 12:15:13.121051] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:12.321 [2024-11-25 12:15:13.121076] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:12.321 [2024-11-25 12:15:13.121244] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:13.696 12:15:14 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:13.696 00:17:13.696 real 0m7.731s 00:17:13.696 user 0m5.465s 00:17:13.696 sys 0m2.895s 00:17:13.696 12:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.696 12:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:13.696 ************************************ 00:17:13.696 END TEST test_save_ublk_config 00:17:13.696 ************************************ 00:17:13.696 12:15:14 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73898 00:17:13.696 12:15:14 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:13.696 12:15:14 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.696 12:15:14 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73898 00:17:13.696 12:15:14 ublk -- common/autotest_common.sh@835 -- # '[' -z 73898 ']' 00:17:13.696 12:15:14 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.696 12:15:14 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.696 12:15:14 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.696 12:15:14 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.696 12:15:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.696 [2024-11-25 12:15:14.550628] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
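At this point the harness has torn down the first target (killprocess 73823: kill, then wait while the reactor walks the UBLK_CMD_STOP_DEV/DEL_DEV shutdown sequence) and booted a second one on two cores (-m 0x3) for the create tests. waitforlisten and killprocess are autotest_common.sh helpers; a rough stand-in in plain shell, assuming the default RPC socket and using spdk_get_version as a cheap liveness probe (the choice of probe RPC is an assumption here, not what the harness uses):

    # Sketch of the launch/wait/kill lifecycle used around every test in this log.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          spdk_get_version >/dev/null 2>&1; do
        sleep 0.5   # socket not answering yet; keep polling
    done
    # ... drive the target over RPC ...
    kill "$spdk_pid" && wait "$spdk_pid"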
00:17:13.696 [2024-11-25 12:15:14.550786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73898 ] 00:17:13.696 [2024-11-25 12:15:14.705805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:13.954 [2024-11-25 12:15:14.792674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.954 [2024-11-25 12:15:14.792797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.520 12:15:15 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.520 12:15:15 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:14.520 12:15:15 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:14.520 12:15:15 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:14.520 12:15:15 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.520 12:15:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.520 ************************************ 00:17:14.520 START TEST test_create_ublk 00:17:14.520 ************************************ 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:14.520 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.520 [2024-11-25 12:15:15.358966] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:14.520 [2024-11-25 12:15:15.360632] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.520 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:14.520 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.520 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:14.520 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.520 [2024-11-25 12:15:15.524112] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:14.520 [2024-11-25 12:15:15.524436] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:14.520 [2024-11-25 12:15:15.524451] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:14.520 [2024-11-25 12:15:15.524457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:14.520 [2024-11-25 12:15:15.532008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:14.520 [2024-11-25 12:15:15.532033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:14.520 
[2024-11-25 12:15:15.539984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:14.520 [2024-11-25 12:15:15.553017] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:14.520 [2024-11-25 12:15:15.573983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.520 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:14.520 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:14.520 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.520 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 12:15:15 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:14.777 { 00:17:14.777 "ublk_device": "/dev/ublkb0", 00:17:14.777 "id": 0, 00:17:14.777 "queue_depth": 512, 00:17:14.777 "num_queues": 4, 00:17:14.777 "bdev_name": "Malloc0" 00:17:14.777 } 00:17:14.777 ]' 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:14.777 12:15:15 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
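run_fio_test (lvol/common.sh@40-52) has just assembled the command that executes next: a 10-second, time-based, direct-I/O write of pattern 0xcc across the first 134217728 bytes (128 MiB) of /dev/ublkb0, with fio's inline verification switched on. Since the write phase consumes the whole runtime, fio notes below that its separate read-verify phase never starts; an explicit read-back pass would look roughly like this (a sketch, not part of the test):

    # Sketch: stand-alone verification of the 0xcc pattern laid down by the
    # job below. The test itself relies on fio's inline verify instead.
    fio --name=verify_readback --filename=/dev/ublkb0 --offset=0 \
        --size=134217728 --rw=read --direct=1 \
        --verify=pattern --verify_pattern=0xcc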
00:17:14.777 12:15:15 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:14.777 fio: verification read phase will never start because write phase uses all of runtime 00:17:14.777 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:14.777 fio-3.35 00:17:14.777 Starting 1 process 00:17:26.979 00:17:26.979 fio_test: (groupid=0, jobs=1): err= 0: pid=73943: Mon Nov 25 12:15:25 2024 00:17:26.979 write: IOPS=18.5k, BW=72.2MiB/s (75.7MB/s)(722MiB/10001msec); 0 zone resets 00:17:26.979 clat (usec): min=35, max=3973, avg=53.26, stdev=88.95 00:17:26.979 lat (usec): min=35, max=3974, avg=53.74, stdev=88.97 00:17:26.979 clat percentiles (usec): 00:17:26.979 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 45], 00:17:26.979 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 50], 00:17:26.979 | 70.00th=[ 52], 80.00th=[ 55], 90.00th=[ 59], 95.00th=[ 64], 00:17:26.979 | 99.00th=[ 74], 99.50th=[ 84], 99.90th=[ 1663], 99.95th=[ 2671], 00:17:26.979 | 99.99th=[ 3490] 00:17:26.979 bw ( KiB/s): min=63912, max=80672, per=99.88%, avg=73812.63, stdev=4234.78, samples=19 00:17:26.979 iops : min=15978, max=20168, avg=18453.16, stdev=1058.69, samples=19 00:17:26.979 lat (usec) : 50=60.03%, 100=39.61%, 250=0.17%, 500=0.04%, 750=0.01% 00:17:26.979 lat (usec) : 1000=0.01% 00:17:26.979 lat (msec) : 2=0.04%, 4=0.08% 00:17:26.979 cpu : usr=2.78%, sys=15.57%, ctx=184760, majf=0, minf=795 00:17:26.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.979 issued rwts: total=0,184763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.979 00:17:26.979 Run status group 0 (all jobs): 00:17:26.979 WRITE: bw=72.2MiB/s (75.7MB/s), 72.2MiB/s-72.2MiB/s (75.7MB/s-75.7MB/s), io=722MiB (757MB), run=10001-10001msec 00:17:26.979 00:17:26.979 Disk stats (read/write): 00:17:26.979 ublkb0: ios=0/182733, merge=0/0, ticks=0/8173, in_queue=8173, util=99.07% 00:17:26.979 12:15:25 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:26.979 12:15:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.979 12:15:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.979 [2024-11-25 12:15:25.975227] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.979 [2024-11-25 12:15:26.005520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.979 [2024-11-25 12:15:26.006333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.979 [2024-11-25 12:15:26.016013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.979 [2024-11-25 12:15:26.020205] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:26.979 [2024-11-25 12:15:26.020233] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.979 12:15:26 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
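The fio job finishes at ~72 MiB/s with 99.07% device utilization, after which ublk.sh@51 stops the disk (STOP_DEV, then DEL_DEV) and ublk.sh@53 wraps a second ublk_stop_disk 0 in NOT to assert that stopping an already-removed device fails. The error it expects, JSON-RPC code -19 (No such device), appears just below; reproduced outside the harness the exchange would look roughly like this (a sketch; rpc.py exits non-zero when the target returns an error):

    # Sketch: the stop/negative-stop check ublk.sh@51-53 performs below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk 0        # first stop succeeds
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk 0 \
        || echo 'second stop rejected with -19, as expected'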
00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.979 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.979 [2024-11-25 12:15:26.024141] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:26.979 request: 00:17:26.979 { 00:17:26.979 "ublk_id": 0, 00:17:26.979 "method": "ublk_stop_disk", 00:17:26.979 "req_id": 1 00:17:26.980 } 00:17:26.980 Got JSON-RPC error response 00:17:26.980 response: 00:17:26.980 { 00:17:26.980 "code": -19, 00:17:26.980 "message": "No such device" 00:17:26.980 } 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.980 12:15:26 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 [2024-11-25 12:15:26.039071] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:26.980 [2024-11-25 12:15:26.042909] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:26.980 [2024-11-25 12:15:26.046985] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:26 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:26 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:26.980 12:15:26 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:26 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:26.980 12:15:26 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:26.980 12:15:26 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:26.980 12:15:26 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:26 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:26.980 12:15:26 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:26.980 ************************************ 00:17:26.980 END TEST test_create_ublk 00:17:26.980 ************************************ 00:17:26.980 12:15:26 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:26.980 00:17:26.980 real 0m11.170s 00:17:26.980 user 0m0.568s 00:17:26.980 sys 0m1.636s 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 12:15:26 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:26.980 12:15:26 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:26.980 12:15:26 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.980 12:15:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 ************************************ 00:17:26.980 START TEST test_create_multi_ublk 00:17:26.980 ************************************ 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 [2024-11-25 12:15:26.560964] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:26.980 [2024-11-25 12:15:26.562685] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 [2024-11-25 12:15:26.801125] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:17:26.980 [2024-11-25 12:15:26.801487] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:26.980 [2024-11-25 12:15:26.801499] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:26.980 [2024-11-25 12:15:26.801508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:26.980 [2024-11-25 12:15:26.811197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:26.980 [2024-11-25 12:15:26.811232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:26.980 [2024-11-25 12:15:26.825003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:26.980 [2024-11-25 12:15:26.825611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:26.980 [2024-11-25 12:15:26.864994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 [2024-11-25 12:15:27.084102] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:26.980 [2024-11-25 12:15:27.084422] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:26.980 [2024-11-25 12:15:27.084435] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:26.980 [2024-11-25 12:15:27.084441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:26.980 [2024-11-25 12:15:27.092034] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:26.980 [2024-11-25 12:15:27.092065] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:26.980 [2024-11-25 12:15:27.098013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:26.980 [2024-11-25 12:15:27.098605] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:26.980 [2024-11-25 12:15:27.101598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.980 12:15:27 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 [2024-11-25 12:15:27.257275] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:26.980 [2024-11-25 12:15:27.257603] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:26.980 [2024-11-25 12:15:27.257614] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:26.980 [2024-11-25 12:15:27.257622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:26.980 [2024-11-25 12:15:27.265236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:26.980 [2024-11-25 12:15:27.265272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:26.980 [2024-11-25 12:15:27.273015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:26.980 [2024-11-25 12:15:27.273713] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:26.980 [2024-11-25 12:15:27.276338] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.980 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.981 [2024-11-25 12:15:27.446113] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:26.981 [2024-11-25 12:15:27.446445] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:26.981 [2024-11-25 12:15:27.446454] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:26.981 [2024-11-25 12:15:27.446459] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:26.981 [2024-11-25 
12:15:27.454005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:26.981 [2024-11-25 12:15:27.454030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:26.981 [2024-11-25 12:15:27.462002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:26.981 [2024-11-25 12:15:27.462580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:26.981 [2024-11-25 12:15:27.467882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:26.981 { 00:17:26.981 "ublk_device": "/dev/ublkb0", 00:17:26.981 "id": 0, 00:17:26.981 "queue_depth": 512, 00:17:26.981 "num_queues": 4, 00:17:26.981 "bdev_name": "Malloc0" 00:17:26.981 }, 00:17:26.981 { 00:17:26.981 "ublk_device": "/dev/ublkb1", 00:17:26.981 "id": 1, 00:17:26.981 "queue_depth": 512, 00:17:26.981 "num_queues": 4, 00:17:26.981 "bdev_name": "Malloc1" 00:17:26.981 }, 00:17:26.981 { 00:17:26.981 "ublk_device": "/dev/ublkb2", 00:17:26.981 "id": 2, 00:17:26.981 "queue_depth": 512, 00:17:26.981 "num_queues": 4, 00:17:26.981 "bdev_name": "Malloc2" 00:17:26.981 }, 00:17:26.981 { 00:17:26.981 "ublk_device": "/dev/ublkb3", 00:17:26.981 "id": 3, 00:17:26.981 "queue_depth": 512, 00:17:26.981 "num_queues": 4, 00:17:26.981 "bdev_name": "Malloc3" 00:17:26.981 } 00:17:26.981 ]' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
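The four-entry table printed above is the product of the loop at ublk/ublk.sh@64-68, whose xtrace fills the preceding lines: one 128 MiB malloc bdev per index, each exported as /dev/ublkb<i> with 4 queues of depth 512, and the jq checks that continue below walk that same array entry by entry. Collapsed into plain rpc.py calls (arguments copied from the trace; the loop form itself is a sketch):

    # Sketch of the creation loop behind the ublk_get_disks output above.
    for i in 0 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks   # emits the JSON array above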
00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:26.981 12:15:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:26.981 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:26.981 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:26.981 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:26.981 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.239 [2024-11-25 12:15:28.106078] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:27.239 [2024-11-25 12:15:28.153995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:27.239 [2024-11-25 12:15:28.154760] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:27.239 [2024-11-25 12:15:28.162000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:27.239 [2024-11-25 12:15:28.162268] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:27.239 [2024-11-25 12:15:28.162280] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.239 [2024-11-25 12:15:28.178049] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:27.239 [2024-11-25 12:15:28.217492] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:27.239 [2024-11-25 12:15:28.218444] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:27.239 [2024-11-25 12:15:28.224983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:27.239 [2024-11-25 12:15:28.225246] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:27.239 [2024-11-25 12:15:28.225259] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.239 [2024-11-25 12:15:28.241076] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:27.239 [2024-11-25 12:15:28.274477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:27.239 [2024-11-25 12:15:28.275425] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:27.239 [2024-11-25 12:15:28.280985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:27.239 [2024-11-25 12:15:28.281247] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:27.239 [2024-11-25 12:15:28.281262] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.239 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:17:27.239 [2024-11-25 12:15:28.297081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:27.497 [2024-11-25 12:15:28.335466] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:27.497 [2024-11-25 12:15:28.336400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:27.497 [2024-11-25 12:15:28.344992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:27.497 [2024-11-25 12:15:28.345254] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:27.497 [2024-11-25 12:15:28.345267] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:27.497 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.497 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:27.755 [2024-11-25 12:15:28.593045] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:27.755 [2024-11-25 12:15:28.596802] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:27.755 [2024-11-25 12:15:28.596841] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:27.755 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:27.755 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.755 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:27.755 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.755 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.013 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.013 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:28.013 12:15:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:28.013 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.013 12:15:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.292 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.292 12:15:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:28.292 12:15:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:28.292 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.292 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.551 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.551 12:15:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:28.551 12:15:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:28.551 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.551 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:28.810 00:17:28.810 real 0m3.258s 00:17:28.810 user 0m0.832s 00:17:28.810 sys 0m0.131s 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.810 12:15:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.810 ************************************ 00:17:28.810 END TEST test_create_multi_ublk 00:17:28.810 ************************************ 00:17:28.810 12:15:29 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:28.810 12:15:29 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:28.810 12:15:29 ublk -- ublk/ublk.sh@130 -- # killprocess 73898 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@954 -- # '[' -z 73898 ']' 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@958 -- # kill -0 73898 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@959 -- # uname 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73898 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.810 killing process with pid 73898 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73898' 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@973 -- # kill 73898 00:17:28.810 12:15:29 ublk -- common/autotest_common.sh@978 -- # wait 73898 00:17:29.376 [2024-11-25 12:15:30.410065] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:29.376 [2024-11-25 12:15:30.410119] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:30.308 00:17:30.308 real 0m24.565s 00:17:30.308 user 0m35.120s 00:17:30.308 sys 0m9.511s 00:17:30.308 12:15:31 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.308 12:15:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:30.308 ************************************ 00:17:30.308 END TEST ublk 00:17:30.308 ************************************ 00:17:30.308 12:15:31 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:30.308 12:15:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:17:30.308 12:15:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.308 12:15:31 -- common/autotest_common.sh@10 -- # set +x 00:17:30.309 ************************************ 00:17:30.309 START TEST ublk_recovery 00:17:30.309 ************************************ 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:30.309 * Looking for test storage... 00:17:30.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.309 12:15:31 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:30.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.309 --rc genhtml_branch_coverage=1 00:17:30.309 --rc genhtml_function_coverage=1 00:17:30.309 --rc genhtml_legend=1 00:17:30.309 --rc geninfo_all_blocks=1 00:17:30.309 --rc geninfo_unexecuted_blocks=1 00:17:30.309 00:17:30.309 ' 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:30.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.309 --rc genhtml_branch_coverage=1 00:17:30.309 --rc genhtml_function_coverage=1 00:17:30.309 --rc genhtml_legend=1 00:17:30.309 --rc geninfo_all_blocks=1 00:17:30.309 --rc geninfo_unexecuted_blocks=1 00:17:30.309 00:17:30.309 ' 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:30.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.309 --rc genhtml_branch_coverage=1 00:17:30.309 --rc genhtml_function_coverage=1 00:17:30.309 --rc genhtml_legend=1 00:17:30.309 --rc geninfo_all_blocks=1 00:17:30.309 --rc geninfo_unexecuted_blocks=1 00:17:30.309 00:17:30.309 ' 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:30.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.309 --rc genhtml_branch_coverage=1 00:17:30.309 --rc genhtml_function_coverage=1 00:17:30.309 --rc genhtml_legend=1 00:17:30.309 --rc geninfo_all_blocks=1 00:17:30.309 --rc geninfo_unexecuted_blocks=1 00:17:30.309 00:17:30.309 ' 00:17:30.309 12:15:31 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:30.309 12:15:31 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:30.309 12:15:31 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:30.309 12:15:31 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:30.309 12:15:31 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:30.309 12:15:31 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:30.309 12:15:31 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:30.309 12:15:31 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:30.309 12:15:31 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:30.309 12:15:31 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:30.309 12:15:31 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74293 00:17:30.309 12:15:31 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.309 12:15:31 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74293 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74293 ']' 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.309 12:15:31 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.309 12:15:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.309 [2024-11-25 12:15:31.320244] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:17:30.309 [2024-11-25 12:15:31.320694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74293 ] 00:17:30.566 [2024-11-25 12:15:31.472758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:30.566 [2024-11-25 12:15:31.559613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.566 [2024-11-25 12:15:31.559928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:31.172 12:15:32 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 [2024-11-25 12:15:32.108967] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:31.172 [2024-11-25 12:15:32.110613] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.172 12:15:32 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 malloc0 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.172 12:15:32 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 [2024-11-25 12:15:32.197095] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:31.172 [2024-11-25 12:15:32.197187] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:31.172 [2024-11-25 12:15:32.197199] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:31.172 [2024-11-25 12:15:32.197207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:31.172 [2024-11-25 12:15:32.206049] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:31.172 [2024-11-25 12:15:32.206075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:31.172 [2024-11-25 12:15:32.212976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:31.172 [2024-11-25 12:15:32.213113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:31.172 [2024-11-25 12:15:32.234989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:31.172 1 00:17:31.172 12:15:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.172 12:15:32 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:32.548 12:15:33 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74327 00:17:32.549 12:15:33 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:32.549 12:15:33 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:32.549 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:32.549 fio-3.35 00:17:32.549 Starting 1 process 00:17:37.814 12:15:38 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74293 00:17:37.814 12:15:38 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:43.072 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74293 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:43.072 12:15:43 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74438 00:17:43.072 12:15:43 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.072 12:15:43 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74438 00:17:43.072 12:15:43 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:43.072 12:15:43 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74438 ']' 00:17:43.072 12:15:43 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.072 12:15:43 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.072 12:15:43 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.072 12:15:43 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.072 12:15:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.072 [2024-11-25 12:15:43.330162] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
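The flow this test drives -- create a ublk target over a malloc bdev, run fio against /dev/ublkb1, hard-kill the target mid-I/O, then bring up a fresh spdk_tgt and recover the disk -- can be replayed by hand with rpc.py. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and the same geometry as this run (64 MiB malloc bdev, 4096-byte blocks, 2 queues, queue depth 128); spdk_pid is a stand-in for however the first target's pid was captured:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # first target: expose malloc0 as /dev/ublkb1 and put I/O on it
  $RPC ublk_create_target
  $RPC bdev_malloc_create -b malloc0 64 4096
  $RPC ublk_start_disk malloc0 1 -q 2 -d 128
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  kill -9 "$spdk_pid"                 # hard-kill mid-I/O, as the test does
  # second target: recreate the ublk target, then recover disk id 1
  $RPC ublk_create_target
  $RPC bdev_malloc_create -b malloc0 64 4096
  $RPC ublk_recover_disk malloc0 1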
00:17:43.072 [2024-11-25 12:15:43.330285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74438 ] 00:17:43.072 [2024-11-25 12:15:43.488479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:43.072 [2024-11-25 12:15:43.577581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.072 [2024-11-25 12:15:43.577858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:43.330 12:15:44 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.330 [2024-11-25 12:15:44.238980] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:43.330 [2024-11-25 12:15:44.240838] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.330 12:15:44 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.330 malloc0 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.330 12:15:44 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.330 [2024-11-25 12:15:44.327127] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:43.330 [2024-11-25 12:15:44.327169] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:43.330 [2024-11-25 12:15:44.327178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:43.330 [2024-11-25 12:15:44.335036] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:43.330 [2024-11-25 12:15:44.335071] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:43.330 1 00:17:43.330 12:15:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.330 12:15:44 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74327 00:17:44.265 [2024-11-25 12:15:45.335985] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:44.523 [2024-11-25 12:15:45.343984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:44.523 [2024-11-25 12:15:45.344013] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:45.457 [2024-11-25 12:15:46.344051] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:45.457 [2024-11-25 12:15:46.347981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:45.457 [2024-11-25 12:15:46.348003] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:46.389 [2024-11-25 12:15:47.348032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:46.389 [2024-11-25 12:15:47.355971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:46.389 [2024-11-25 12:15:47.355989] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:46.389 [2024-11-25 12:15:47.355998] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:46.389 [2024-11-25 12:15:47.356080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:08.347 [2024-11-25 12:16:08.608980] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:08.347 [2024-11-25 12:16:08.615509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:08.347 [2024-11-25 12:16:08.623159] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:08.347 [2024-11-25 12:16:08.623182] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:34.952 00:18:34.952 fio_test: (groupid=0, jobs=1): err= 0: pid=74331: Mon Nov 25 12:16:33 2024 00:18:34.952 read: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(3241MiB/60003msec) 00:18:34.952 slat (nsec): min=917, max=256820, avg=5117.39, stdev=1958.11 00:18:34.952 clat (usec): min=851, max=30384k, avg=4621.70, stdev=268875.19 00:18:34.952 lat (usec): min=856, max=30384k, avg=4626.82, stdev=268875.19 00:18:34.952 clat percentiles (usec): 00:18:34.952 | 1.00th=[ 1696], 5.00th=[ 1844], 10.00th=[ 1893], 20.00th=[ 1926], 00:18:34.952 | 30.00th=[ 1958], 40.00th=[ 1991], 50.00th=[ 2024], 60.00th=[ 2089], 00:18:34.952 | 70.00th=[ 2343], 80.00th=[ 2409], 90.00th=[ 2507], 95.00th=[ 3261], 00:18:34.952 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7701], 99.95th=[ 8848], 00:18:34.952 | 99.99th=[13435] 00:18:34.952 bw ( KiB/s): min=38712, max=125696, per=100.00%, avg=110574.88, stdev=16816.47, samples=59 00:18:34.952 iops : min= 9678, max=31424, avg=27643.71, stdev=4204.11, samples=59 00:18:34.952 write: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(3237MiB/60003msec); 0 zone resets 00:18:34.952 slat (nsec): min=933, max=2977.7k, avg=5170.46, stdev=3798.03 00:18:34.952 clat (usec): min=723, max=30384k, avg=4628.49, stdev=264880.32 00:18:34.952 lat (usec): min=729, max=30384k, avg=4633.66, stdev=264880.33 00:18:34.952 clat percentiles (usec): 00:18:34.952 | 1.00th=[ 1729], 5.00th=[ 1926], 10.00th=[ 1975], 20.00th=[ 2008], 00:18:34.952 | 30.00th=[ 2040], 40.00th=[ 2073], 50.00th=[ 2114], 60.00th=[ 2180], 00:18:34.952 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2606], 95.00th=[ 3228], 00:18:34.952 | 99.00th=[ 5211], 99.50th=[ 5800], 99.90th=[ 7767], 99.95th=[ 8717], 00:18:34.952 | 99.99th=[13304] 00:18:34.952 bw ( KiB/s): min=38648, max=126016, per=100.00%, avg=110437.27, stdev=16733.44, samples=59 00:18:34.952 iops : min= 9662, max=31504, avg=27609.31, stdev=4183.35, samples=59 00:18:34.952 lat (usec) : 750=0.01%, 1000=0.01% 00:18:34.952 lat (msec) : 2=30.38%, 4=66.57%, 10=3.01%, 20=0.04%, >=2000=0.01% 00:18:34.952 cpu : usr=3.21%, sys=14.54%, ctx=57723, majf=0, minf=14 00:18:34.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:34.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:18:34.952 issued rwts: total=829808,828728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.952 00:18:34.952 Run status group 0 (all jobs): 00:18:34.952 READ: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=3241MiB (3399MB), run=60003-60003msec 00:18:34.952 WRITE: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=3237MiB (3394MB), run=60003-60003msec 00:18:34.952 00:18:34.952 Disk stats (read/write): 00:18:34.952 ublkb1: ios=826486/825391, merge=0/0, ticks=3782458/3715947, in_queue=7498406, util=99.90% 00:18:34.952 12:16:33 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.952 [2024-11-25 12:16:33.486982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:34.952 [2024-11-25 12:16:33.526990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:34.952 [2024-11-25 12:16:33.527134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:34.952 [2024-11-25 12:16:33.538964] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:34.952 [2024-11-25 12:16:33.539071] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:34.952 [2024-11-25 12:16:33.539079] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.952 12:16:33 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.952 [2024-11-25 12:16:33.543127] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:34.952 [2024-11-25 12:16:33.549962] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:34.952 [2024-11-25 12:16:33.549997] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.952 12:16:33 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:34.952 12:16:33 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:34.952 12:16:33 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74438 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74438 ']' 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74438 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74438 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.952 killing process with pid 74438 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74438' 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74438 00:18:34.952 12:16:33 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 74438 00:18:34.952 [2024-11-25 12:16:34.694218] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:34.952 [2024-11-25 12:16:34.694260] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:34.952 ************************************ 00:18:34.952 END TEST ublk_recovery 00:18:34.952 ************************************ 00:18:34.952 00:18:34.952 real 1m4.318s 00:18:34.952 user 1m48.174s 00:18:34.952 sys 0m20.546s 00:18:34.952 12:16:35 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.952 12:16:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.952 12:16:35 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:34.952 12:16:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:34.952 12:16:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.952 12:16:35 -- common/autotest_common.sh@10 -- # set +x 00:18:34.952 12:16:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:34.952 12:16:35 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:34.952 12:16:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:34.952 12:16:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.952 12:16:35 -- common/autotest_common.sh@10 -- # set +x 00:18:34.952 ************************************ 00:18:34.952 START TEST ftl 00:18:34.952 ************************************ 00:18:34.952 12:16:35 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:34.953 * Looking for test storage... 
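The killprocess helper traced just above reduces to a few checks before the signal: the pid must be non-empty and still alive, the process name is read with ps so a sudo wrapper is never signalled directly, and the helper waits for the pid to be reaped. A compressed sketch of that sequence (the real helper resolves sudo's child pid instead of bailing out; abbreviated here):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                  # must still be running
    local process_name
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
      return 1      # real helper signals sudo's child instead; abbreviated
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                         # reap; ignore non-zero exit
  }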
00:18:34.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:34.953 12:16:35 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.953 12:16:35 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.953 12:16:35 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.953 12:16:35 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.953 12:16:35 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.953 12:16:35 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.953 12:16:35 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.953 12:16:35 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.953 12:16:35 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.953 12:16:35 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.953 12:16:35 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.953 12:16:35 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:34.953 12:16:35 ftl -- scripts/common.sh@345 -- # : 1 00:18:34.953 12:16:35 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.953 12:16:35 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.953 12:16:35 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:34.953 12:16:35 ftl -- scripts/common.sh@353 -- # local d=1 00:18:34.953 12:16:35 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.953 12:16:35 ftl -- scripts/common.sh@355 -- # echo 1 00:18:34.953 12:16:35 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.953 12:16:35 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:34.953 12:16:35 ftl -- scripts/common.sh@353 -- # local d=2 00:18:34.953 12:16:35 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.953 12:16:35 ftl -- scripts/common.sh@355 -- # echo 2 00:18:34.953 12:16:35 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.953 12:16:35 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.953 12:16:35 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.953 12:16:35 ftl -- scripts/common.sh@368 -- # return 0 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:34.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.953 --rc genhtml_branch_coverage=1 00:18:34.953 --rc genhtml_function_coverage=1 00:18:34.953 --rc genhtml_legend=1 00:18:34.953 --rc geninfo_all_blocks=1 00:18:34.953 --rc geninfo_unexecuted_blocks=1 00:18:34.953 00:18:34.953 ' 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:34.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.953 --rc genhtml_branch_coverage=1 00:18:34.953 --rc genhtml_function_coverage=1 00:18:34.953 --rc genhtml_legend=1 00:18:34.953 --rc geninfo_all_blocks=1 00:18:34.953 --rc geninfo_unexecuted_blocks=1 00:18:34.953 00:18:34.953 ' 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:34.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.953 --rc genhtml_branch_coverage=1 00:18:34.953 --rc genhtml_function_coverage=1 00:18:34.953 --rc 
genhtml_legend=1 00:18:34.953 --rc geninfo_all_blocks=1 00:18:34.953 --rc geninfo_unexecuted_blocks=1 00:18:34.953 00:18:34.953 ' 00:18:34.953 12:16:35 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:34.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.953 --rc genhtml_branch_coverage=1 00:18:34.953 --rc genhtml_function_coverage=1 00:18:34.953 --rc genhtml_legend=1 00:18:34.953 --rc geninfo_all_blocks=1 00:18:34.953 --rc geninfo_unexecuted_blocks=1 00:18:34.953 00:18:34.953 ' 00:18:34.953 12:16:35 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:34.953 12:16:35 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:34.953 12:16:35 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:34.953 12:16:35 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:34.953 12:16:35 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:34.953 12:16:35 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:34.953 12:16:35 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.953 12:16:35 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:34.953 12:16:35 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:34.953 12:16:35 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:34.953 12:16:35 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:34.953 12:16:35 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:34.953 12:16:35 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:34.953 12:16:35 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:34.953 12:16:35 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:34.953 12:16:35 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:34.953 12:16:35 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:34.953 12:16:35 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:34.953 12:16:35 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:34.953 12:16:35 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:34.953 12:16:35 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:34.953 12:16:35 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:34.953 12:16:35 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:34.953 12:16:35 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:34.953 12:16:35 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:34.953 12:16:35 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:34.953 12:16:35 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:34.953 12:16:35 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.953 12:16:35 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.953 12:16:35 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.953 12:16:35 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:34.953 12:16:35 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:18:34.953 12:16:35 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:34.953 12:16:35 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:34.953 12:16:35 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:34.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:35.212 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:35.212 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:35.212 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:35.212 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:35.212 12:16:36 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75239 00:18:35.212 12:16:36 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75239 00:18:35.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.212 12:16:36 ftl -- common/autotest_common.sh@835 -- # '[' -z 75239 ']' 00:18:35.213 12:16:36 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.213 12:16:36 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.213 12:16:36 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.213 12:16:36 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.213 12:16:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:35.213 12:16:36 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:35.213 [2024-11-25 12:16:36.172366] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:18:35.213 [2024-11-25 12:16:36.172490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75239 ] 00:18:35.470 [2024-11-25 12:16:36.332438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.470 [2024-11-25 12:16:36.431479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.038 12:16:37 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.038 12:16:37 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:36.038 12:16:37 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:36.296 12:16:37 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:37.231 12:16:38 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:37.231 12:16:38 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:37.489 12:16:38 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:37.489 12:16:38 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:37.489 12:16:38 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:37.747 12:16:38 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:37.747 12:16:38 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:37.747 12:16:38 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:37.747 12:16:38 ftl -- ftl/ftl.sh@50 -- # break 00:18:37.747 12:16:38 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:37.747 12:16:38 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:18:37.747 12:16:38 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:37.747 12:16:38 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:38.005 12:16:38 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:38.005 12:16:38 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:38.005 12:16:38 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:38.005 12:16:38 ftl -- ftl/ftl.sh@63 -- # break 00:18:38.005 12:16:38 ftl -- ftl/ftl.sh@66 -- # killprocess 75239 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@954 -- # '[' -z 75239 ']' 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@958 -- # kill -0 75239 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@959 -- # uname 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75239 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.005 killing process with pid 75239 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75239' 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@973 -- # kill 75239 00:18:38.005 12:16:38 ftl -- common/autotest_common.sh@978 -- # wait 75239 00:18:39.377 12:16:40 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:39.377 12:16:40 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:39.377 12:16:40 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:39.377 12:16:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.377 12:16:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:39.377 ************************************ 00:18:39.377 START TEST ftl_fio_basic 00:18:39.377 ************************************ 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:39.377 * Looking for test storage... 
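ftl.sh, which has just finished above, picked its devices by filtering bdev_get_bdevs output through jq: the write-buffer cache must be a non-zoned bdev with 64-byte metadata and at least 1310720 blocks, and the base device is any other non-zoned NVMe bdev of the same minimum size. A standalone sketch using the same predicates as the trace, assuming a single cache device so its PCI address can be passed back in with --arg:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # cache candidates: 64-byte metadata, non-zoned, >= 1310720 blocks
  nv_cache=$($RPC bdev_get_bdevs | jq -r '.[]
    | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
    | .driver_specific.nvme[].pci_address')
  # base candidates: any other big-enough non-zoned NVMe bdev
  base=$($RPC bdev_get_bdevs | jq -r --arg nv "$nv_cache" '.[]
    | select(.driver_specific.nvme[0].pci_address != $nv and .zoned == false
             and .num_blocks >= 1310720)
    | .driver_specific.nvme[].pci_address')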
00:18:39.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:39.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.377 --rc genhtml_branch_coverage=1 00:18:39.377 --rc genhtml_function_coverage=1 00:18:39.377 --rc genhtml_legend=1 00:18:39.377 --rc geninfo_all_blocks=1 00:18:39.377 --rc geninfo_unexecuted_blocks=1 00:18:39.377 00:18:39.377 ' 00:18:39.377 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:39.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.378 --rc 
genhtml_branch_coverage=1 00:18:39.378 --rc genhtml_function_coverage=1 00:18:39.378 --rc genhtml_legend=1 00:18:39.378 --rc geninfo_all_blocks=1 00:18:39.378 --rc geninfo_unexecuted_blocks=1 00:18:39.378 00:18:39.378 ' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:39.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.378 --rc genhtml_branch_coverage=1 00:18:39.378 --rc genhtml_function_coverage=1 00:18:39.378 --rc genhtml_legend=1 00:18:39.378 --rc geninfo_all_blocks=1 00:18:39.378 --rc geninfo_unexecuted_blocks=1 00:18:39.378 00:18:39.378 ' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:39.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.378 --rc genhtml_branch_coverage=1 00:18:39.378 --rc genhtml_function_coverage=1 00:18:39.378 --rc genhtml_legend=1 00:18:39.378 --rc geninfo_all_blocks=1 00:18:39.378 --rc geninfo_unexecuted_blocks=1 00:18:39.378 00:18:39.378 ' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:39.378 
12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:39.378 12:16:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:39.635 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75371 00:18:39.636 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75371 00:18:39.636 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75371 ']' 00:18:39.636 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.636 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.636 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
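waitforlisten itself is not expanded in this trace; it blocks, up to max_retries=100, until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock, and fails early if the process dies first. A rough stand-in, assuming rpc.py's rpc_get_methods as the liveness probe rather than whatever the real helper calls:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  waitforlisten() {              # $1 = pid of the target being started
    local i
    for (( i = 0; i < 100; i++ )); do
      kill -0 "$1" 2>/dev/null || return 1          # died while starting
      $RPC -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
      sleep 0.5
    done
    return 1                                        # never came up
  }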
00:18:39.636 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.636 12:16:40 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:39.636 12:16:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:39.636 [2024-11-25 12:16:40.530699] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:18:39.636 [2024-11-25 12:16:40.530815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75371 ] 00:18:39.636 [2024-11-25 12:16:40.684735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:39.894 [2024-11-25 12:16:40.770023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.894 [2024-11-25 12:16:40.770099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.894 [2024-11-25 12:16:40.770077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.459 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.459 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:40.459 12:16:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:40.459 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:40.459 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:40.459 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:40.459 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:40.459 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:40.717 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:40.717 { 00:18:40.717 "name": "nvme0n1", 00:18:40.717 "aliases": [ 00:18:40.717 "e4c09572-db32-438e-a60d-de91e1a2a999" 00:18:40.717 ], 00:18:40.717 "product_name": "NVMe disk", 00:18:40.717 "block_size": 4096, 00:18:40.717 "num_blocks": 1310720, 00:18:40.717 "uuid": "e4c09572-db32-438e-a60d-de91e1a2a999", 00:18:40.717 "numa_id": -1, 00:18:40.717 "assigned_rate_limits": { 00:18:40.717 "rw_ios_per_sec": 0, 00:18:40.717 "rw_mbytes_per_sec": 0, 00:18:40.717 "r_mbytes_per_sec": 0, 00:18:40.717 "w_mbytes_per_sec": 0 00:18:40.717 }, 00:18:40.717 "claimed": false, 00:18:40.717 "zoned": false, 00:18:40.717 "supported_io_types": { 00:18:40.717 "read": true, 00:18:40.717 "write": true, 00:18:40.717 "unmap": true, 00:18:40.717 "flush": true, 
00:18:40.717 "reset": true, 00:18:40.717 "nvme_admin": true, 00:18:40.717 "nvme_io": true, 00:18:40.717 "nvme_io_md": false, 00:18:40.717 "write_zeroes": true, 00:18:40.717 "zcopy": false, 00:18:40.717 "get_zone_info": false, 00:18:40.718 "zone_management": false, 00:18:40.718 "zone_append": false, 00:18:40.718 "compare": true, 00:18:40.718 "compare_and_write": false, 00:18:40.718 "abort": true, 00:18:40.718 "seek_hole": false, 00:18:40.718 "seek_data": false, 00:18:40.718 "copy": true, 00:18:40.718 "nvme_iov_md": false 00:18:40.718 }, 00:18:40.718 "driver_specific": { 00:18:40.718 "nvme": [ 00:18:40.718 { 00:18:40.718 "pci_address": "0000:00:11.0", 00:18:40.718 "trid": { 00:18:40.718 "trtype": "PCIe", 00:18:40.718 "traddr": "0000:00:11.0" 00:18:40.718 }, 00:18:40.718 "ctrlr_data": { 00:18:40.718 "cntlid": 0, 00:18:40.718 "vendor_id": "0x1b36", 00:18:40.718 "model_number": "QEMU NVMe Ctrl", 00:18:40.718 "serial_number": "12341", 00:18:40.718 "firmware_revision": "8.0.0", 00:18:40.718 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:40.718 "oacs": { 00:18:40.718 "security": 0, 00:18:40.718 "format": 1, 00:18:40.718 "firmware": 0, 00:18:40.718 "ns_manage": 1 00:18:40.718 }, 00:18:40.718 "multi_ctrlr": false, 00:18:40.718 "ana_reporting": false 00:18:40.718 }, 00:18:40.718 "vs": { 00:18:40.718 "nvme_version": "1.4" 00:18:40.718 }, 00:18:40.718 "ns_data": { 00:18:40.718 "id": 1, 00:18:40.718 "can_share": false 00:18:40.718 } 00:18:40.718 } 00:18:40.718 ], 00:18:40.718 "mp_policy": "active_passive" 00:18:40.718 } 00:18:40.718 } 00:18:40.718 ]' 00:18:40.718 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:40.718 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:40.718 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:40.976 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:40.976 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:40.976 12:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:40.976 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:40.976 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:40.976 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:40.976 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:40.976 12:16:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:41.233 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:41.233 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:41.233 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0d086759-b8ef-4b30-a42f-8bb0c85dbc53 00:18:41.233 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0d086759-b8ef-4b30-a42f-8bb0c85dbc53 00:18:41.490 12:16:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=fca361da-01ba-4099-aeed-b979533bacd3 00:18:41.490 12:16:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fca361da-01ba-4099-aeed-b979533bacd3 00:18:41.490 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:41.490 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:41.490 12:16:42 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=fca361da-01ba-4099-aeed-b979533bacd3 00:18:41.490 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:41.490 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size fca361da-01ba-4099-aeed-b979533bacd3 00:18:41.491 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fca361da-01ba-4099-aeed-b979533bacd3 00:18:41.491 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:41.491 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:41.491 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:41.491 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fca361da-01ba-4099-aeed-b979533bacd3 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:41.748 { 00:18:41.748 "name": "fca361da-01ba-4099-aeed-b979533bacd3", 00:18:41.748 "aliases": [ 00:18:41.748 "lvs/nvme0n1p0" 00:18:41.748 ], 00:18:41.748 "product_name": "Logical Volume", 00:18:41.748 "block_size": 4096, 00:18:41.748 "num_blocks": 26476544, 00:18:41.748 "uuid": "fca361da-01ba-4099-aeed-b979533bacd3", 00:18:41.748 "assigned_rate_limits": { 00:18:41.748 "rw_ios_per_sec": 0, 00:18:41.748 "rw_mbytes_per_sec": 0, 00:18:41.748 "r_mbytes_per_sec": 0, 00:18:41.748 "w_mbytes_per_sec": 0 00:18:41.748 }, 00:18:41.748 "claimed": false, 00:18:41.748 "zoned": false, 00:18:41.748 "supported_io_types": { 00:18:41.748 "read": true, 00:18:41.748 "write": true, 00:18:41.748 "unmap": true, 00:18:41.748 "flush": false, 00:18:41.748 "reset": true, 00:18:41.748 "nvme_admin": false, 00:18:41.748 "nvme_io": false, 00:18:41.748 "nvme_io_md": false, 00:18:41.748 "write_zeroes": true, 00:18:41.748 "zcopy": false, 00:18:41.748 "get_zone_info": false, 00:18:41.748 "zone_management": false, 00:18:41.748 "zone_append": false, 00:18:41.748 "compare": false, 00:18:41.748 "compare_and_write": false, 00:18:41.748 "abort": false, 00:18:41.748 "seek_hole": true, 00:18:41.748 "seek_data": true, 00:18:41.748 "copy": false, 00:18:41.748 "nvme_iov_md": false 00:18:41.748 }, 00:18:41.748 "driver_specific": { 00:18:41.748 "lvol": { 00:18:41.748 "lvol_store_uuid": "0d086759-b8ef-4b30-a42f-8bb0c85dbc53", 00:18:41.748 "base_bdev": "nvme0n1", 00:18:41.748 "thin_provision": true, 00:18:41.748 "num_allocated_clusters": 0, 00:18:41.748 "snapshot": false, 00:18:41.748 "clone": false, 00:18:41.748 "esnap_clone": false 00:18:41.748 } 00:18:41.748 } 00:18:41.748 } 00:18:41.748 ]' 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:41.748 12:16:42 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
00:18:42.005 12:16:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:42.005 12:16:43 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:42.005 12:16:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size fca361da-01ba-4099-aeed-b979533bacd3 00:18:42.005 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fca361da-01ba-4099-aeed-b979533bacd3 00:18:42.005 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:42.005 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:42.005 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:42.005 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fca361da-01ba-4099-aeed-b979533bacd3 00:18:42.263 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:42.263 { 00:18:42.263 "name": "fca361da-01ba-4099-aeed-b979533bacd3", 00:18:42.263 "aliases": [ 00:18:42.263 "lvs/nvme0n1p0" 00:18:42.263 ], 00:18:42.263 "product_name": "Logical Volume", 00:18:42.263 "block_size": 4096, 00:18:42.263 "num_blocks": 26476544, 00:18:42.263 "uuid": "fca361da-01ba-4099-aeed-b979533bacd3", 00:18:42.263 "assigned_rate_limits": { 00:18:42.263 "rw_ios_per_sec": 0, 00:18:42.263 "rw_mbytes_per_sec": 0, 00:18:42.263 "r_mbytes_per_sec": 0, 00:18:42.264 "w_mbytes_per_sec": 0 00:18:42.264 }, 00:18:42.264 "claimed": false, 00:18:42.264 "zoned": false, 00:18:42.264 "supported_io_types": { 00:18:42.264 "read": true, 00:18:42.264 "write": true, 00:18:42.264 "unmap": true, 00:18:42.264 "flush": false, 00:18:42.264 "reset": true, 00:18:42.264 "nvme_admin": false, 00:18:42.264 "nvme_io": false, 00:18:42.264 "nvme_io_md": false, 00:18:42.264 "write_zeroes": true, 00:18:42.264 "zcopy": false, 00:18:42.264 "get_zone_info": false, 00:18:42.264 "zone_management": false, 00:18:42.264 "zone_append": false, 00:18:42.264 "compare": false, 00:18:42.264 "compare_and_write": false, 00:18:42.264 "abort": false, 00:18:42.264 "seek_hole": true, 00:18:42.264 "seek_data": true, 00:18:42.264 "copy": false, 00:18:42.264 "nvme_iov_md": false 00:18:42.264 }, 00:18:42.264 "driver_specific": { 00:18:42.264 "lvol": { 00:18:42.264 "lvol_store_uuid": "0d086759-b8ef-4b30-a42f-8bb0c85dbc53", 00:18:42.264 "base_bdev": "nvme0n1", 00:18:42.264 "thin_provision": true, 00:18:42.264 "num_allocated_clusters": 0, 00:18:42.264 "snapshot": false, 00:18:42.264 "clone": false, 00:18:42.264 "esnap_clone": false 00:18:42.264 } 00:18:42.264 } 00:18:42.264 } 00:18:42.264 ]' 00:18:42.264 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:42.264 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:42.264 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:42.264 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:42.264 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:42.264 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:42.264 12:16:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:42.264 12:16:43 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:42.522 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size fca361da-01ba-4099-aeed-b979533bacd3 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fca361da-01ba-4099-aeed-b979533bacd3 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:42.522 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fca361da-01ba-4099-aeed-b979533bacd3 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:42.780 { 00:18:42.780 "name": "fca361da-01ba-4099-aeed-b979533bacd3", 00:18:42.780 "aliases": [ 00:18:42.780 "lvs/nvme0n1p0" 00:18:42.780 ], 00:18:42.780 "product_name": "Logical Volume", 00:18:42.780 "block_size": 4096, 00:18:42.780 "num_blocks": 26476544, 00:18:42.780 "uuid": "fca361da-01ba-4099-aeed-b979533bacd3", 00:18:42.780 "assigned_rate_limits": { 00:18:42.780 "rw_ios_per_sec": 0, 00:18:42.780 "rw_mbytes_per_sec": 0, 00:18:42.780 "r_mbytes_per_sec": 0, 00:18:42.780 "w_mbytes_per_sec": 0 00:18:42.780 }, 00:18:42.780 "claimed": false, 00:18:42.780 "zoned": false, 00:18:42.780 "supported_io_types": { 00:18:42.780 "read": true, 00:18:42.780 "write": true, 00:18:42.780 "unmap": true, 00:18:42.780 "flush": false, 00:18:42.780 "reset": true, 00:18:42.780 "nvme_admin": false, 00:18:42.780 "nvme_io": false, 00:18:42.780 "nvme_io_md": false, 00:18:42.780 "write_zeroes": true, 00:18:42.780 "zcopy": false, 00:18:42.780 "get_zone_info": false, 00:18:42.780 "zone_management": false, 00:18:42.780 "zone_append": false, 00:18:42.780 "compare": false, 00:18:42.780 "compare_and_write": false, 00:18:42.780 "abort": false, 00:18:42.780 "seek_hole": true, 00:18:42.780 "seek_data": true, 00:18:42.780 "copy": false, 00:18:42.780 "nvme_iov_md": false 00:18:42.780 }, 00:18:42.780 "driver_specific": { 00:18:42.780 "lvol": { 00:18:42.780 "lvol_store_uuid": "0d086759-b8ef-4b30-a42f-8bb0c85dbc53", 00:18:42.780 "base_bdev": "nvme0n1", 00:18:42.780 "thin_provision": true, 00:18:42.780 "num_allocated_clusters": 0, 00:18:42.780 "snapshot": false, 00:18:42.780 "clone": false, 00:18:42.780 "esnap_clone": false 00:18:42.780 } 00:18:42.780 } 00:18:42.780 } 00:18:42.780 ]' 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:42.780 12:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
fca361da-01ba-4099-aeed-b979533bacd3 -c nvc0n1p0 --l2p_dram_limit 60 00:18:43.039 [2024-11-25 12:16:43.952327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.039 [2024-11-25 12:16:43.952608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.039 [2024-11-25 12:16:43.952628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:43.039 [2024-11-25 12:16:43.952636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.039 [2024-11-25 12:16:43.952697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.952707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.040 [2024-11-25 12:16:43.952715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:43.040 [2024-11-25 12:16:43.952721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.952752] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:43.040 [2024-11-25 12:16:43.953439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.040 [2024-11-25 12:16:43.953464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.953471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.040 [2024-11-25 12:16:43.953480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:18:43.040 [2024-11-25 12:16:43.953486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.953550] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e175993f-d09a-4fb6-befc-a06895ef72e1 00:18:43.040 [2024-11-25 12:16:43.954547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.954573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:43.040 [2024-11-25 12:16:43.954580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:43.040 [2024-11-25 12:16:43.954587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.959417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.959443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.040 [2024-11-25 12:16:43.959452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.778 ms 00:18:43.040 [2024-11-25 12:16:43.959460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.959542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.959550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.040 [2024-11-25 12:16:43.959557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:43.040 [2024-11-25 12:16:43.959567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.959608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.959620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.040 [2024-11-25 12:16:43.959627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.009 ms 00:18:43.040 [2024-11-25 12:16:43.959634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.959660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:43.040 [2024-11-25 12:16:43.962610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.962635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.040 [2024-11-25 12:16:43.962647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:18:43.040 [2024-11-25 12:16:43.962656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.962686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.962694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.040 [2024-11-25 12:16:43.962702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:43.040 [2024-11-25 12:16:43.962709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.962750] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:43.040 [2024-11-25 12:16:43.962872] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.040 [2024-11-25 12:16:43.962885] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.040 [2024-11-25 12:16:43.962894] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:43.040 [2024-11-25 12:16:43.962904] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.040 [2024-11-25 12:16:43.962912] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.040 [2024-11-25 12:16:43.962921] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:43.040 [2024-11-25 12:16:43.962927] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.040 [2024-11-25 12:16:43.962935] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.040 [2024-11-25 12:16:43.962941] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.040 [2024-11-25 12:16:43.962959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.962969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.040 [2024-11-25 12:16:43.962977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:18:43.040 [2024-11-25 12:16:43.962984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.963059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.040 [2024-11-25 12:16:43.963066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.040 [2024-11-25 12:16:43.963074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:43.040 [2024-11-25 12:16:43.963081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.040 [2024-11-25 12:16:43.963173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:18:43.040 [2024-11-25 12:16:43.963181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.040 [2024-11-25 12:16:43.963191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.040 [2024-11-25 12:16:43.963198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:43.040 [2024-11-25 12:16:43.963212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:43.040 [2024-11-25 12:16:43.963226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:43.040 [2024-11-25 12:16:43.963235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.040 [2024-11-25 12:16:43.963247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:43.040 [2024-11-25 12:16:43.963254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:43.040 [2024-11-25 12:16:43.963261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.040 [2024-11-25 12:16:43.963267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:43.040 [2024-11-25 12:16:43.963275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:43.040 [2024-11-25 12:16:43.963280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:43.040 [2024-11-25 12:16:43.963296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:43.040 [2024-11-25 12:16:43.963303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:43.040 [2024-11-25 12:16:43.963317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.040 [2024-11-25 12:16:43.963330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:43.040 [2024-11-25 12:16:43.963336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.040 [2024-11-25 12:16:43.963350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:43.040 [2024-11-25 12:16:43.963360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.040 [2024-11-25 12:16:43.963373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:43.040 [2024-11-25 12:16:43.963379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.040 [2024-11-25 12:16:43.963391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:43.040 [2024-11-25 12:16:43.963399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.040 [2024-11-25 12:16:43.963412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:43.040 [2024-11-25 12:16:43.963427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:43.040 [2024-11-25 12:16:43.963434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.040 [2024-11-25 12:16:43.963439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:43.040 [2024-11-25 12:16:43.963446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:43.040 [2024-11-25 12:16:43.963451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:43.040 [2024-11-25 12:16:43.963463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:43.040 [2024-11-25 12:16:43.963470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963474] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:43.040 [2024-11-25 12:16:43.963481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:43.040 [2024-11-25 12:16:43.963487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.040 [2024-11-25 12:16:43.963494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.040 [2024-11-25 12:16:43.963500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:43.040 [2024-11-25 12:16:43.963509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:43.040 [2024-11-25 12:16:43.963514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:43.040 [2024-11-25 12:16:43.963521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:43.040 [2024-11-25 12:16:43.963526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:43.041 [2024-11-25 12:16:43.963532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:43.041 [2024-11-25 12:16:43.963540] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:43.041 [2024-11-25 12:16:43.963549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.041 [2024-11-25 12:16:43.963556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:43.041 [2024-11-25 12:16:43.963563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:43.041 [2024-11-25 12:16:43.963569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:43.041 [2024-11-25 12:16:43.963578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:43.041 [2024-11-25 12:16:43.963584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:43.041 [2024-11-25 12:16:43.963591] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:43.041 [2024-11-25 12:16:43.963596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:43.041 [2024-11-25 12:16:43.963603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:43.041 [2024-11-25 12:16:43.963610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:43.041 [2024-11-25 12:16:43.963619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:43.041 [2024-11-25 12:16:43.963624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:43.041 [2024-11-25 12:16:43.963631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:43.041 [2024-11-25 12:16:43.963636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:43.041 [2024-11-25 12:16:43.963643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:43.041 [2024-11-25 12:16:43.963649] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:43.041 [2024-11-25 12:16:43.963656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.041 [2024-11-25 12:16:43.963664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:43.041 [2024-11-25 12:16:43.963670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:43.041 [2024-11-25 12:16:43.963676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:43.041 [2024-11-25 12:16:43.963684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:43.041 [2024-11-25 12:16:43.963690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.041 [2024-11-25 12:16:43.963697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.041 [2024-11-25 12:16:43.963702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:18:43.041 [2024-11-25 12:16:43.963709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.041 [2024-11-25 12:16:43.963765] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
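Two annotations on the startup sequence above. First, the `[: -eq: unary operator expected` message from fio.sh line 52 is a quoting bug, not a test failure: the variable under test is unset, so bash expands `[ $var -eq 1 ]` to `[ -eq 1 ]`, the test returns non-zero, and the script falls through to the default `l2p_dram_size_mb=60` path; quoting with a default, e.g. `[ "${var:-0}" -eq 1 ]` (variable name hypothetical), would silence the error without changing behavior. Second, the capacities in the layout dump are consistent with the bdev JSON earlier in the trace, and the 5171 MiB write-buffer cache works out to a 5% slice of the base device (the ratio is inferred from the numbers in this log, not read from ftl/common.sh). A standalone sanity check of the arithmetic:

    # figures copied from the records above
    bs=4096; nb=26476544                    # lvol block_size / num_blocks
    echo $(( bs * nb / 1024 / 1024 ))       # 103424 MiB -> "Base device capacity"
    echo $(( 103424 * 5 / 100 ))            # 5171 MiB   -> cache_size and "NV cache device capacity"
    echo $(( 20971520 * 4 / 1024 / 1024 ))  # 80 MiB     -> "Region l2p ... blocks: 80.00 MiB"

The `--l2p_dram_limit 60` on the bdev_ftl_create call above caps how much of that 80 MiB mapping table may stay resident in DRAM at once, and the scrub that follows wipes all 5 NV cache chunks because this is a freshly created FTL instance.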
00:18:43.041 [2024-11-25 12:16:43.963776] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:45.634 [2024-11-25 12:16:46.237418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.237475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:45.634 [2024-11-25 12:16:46.237492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2273.642 ms 00:18:45.634 [2024-11-25 12:16:46.237502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.263147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.263193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.634 [2024-11-25 12:16:46.263205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.449 ms 00:18:45.634 [2024-11-25 12:16:46.263215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.263351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.263363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:45.634 [2024-11-25 12:16:46.263372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:45.634 [2024-11-25 12:16:46.263383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.301842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.301891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.634 [2024-11-25 12:16:46.301908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.412 ms 00:18:45.634 [2024-11-25 12:16:46.301920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.301980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.301992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.634 [2024-11-25 12:16:46.302002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:45.634 [2024-11-25 12:16:46.302011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.302392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.302411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.634 [2024-11-25 12:16:46.302420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:18:45.634 [2024-11-25 12:16:46.302432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.302576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.302587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.634 [2024-11-25 12:16:46.302595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:45.634 [2024-11-25 12:16:46.302607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.317208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.317240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.634 [2024-11-25 
12:16:46.317250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.579 ms 00:18:45.634 [2024-11-25 12:16:46.317260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.328507] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:45.634 [2024-11-25 12:16:46.342977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.343023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:45.634 [2024-11-25 12:16:46.343037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.611 ms 00:18:45.634 [2024-11-25 12:16:46.343048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.396649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.396699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:45.634 [2024-11-25 12:16:46.396717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.556 ms 00:18:45.634 [2024-11-25 12:16:46.396726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.396893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.396902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:45.634 [2024-11-25 12:16:46.396915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:18:45.634 [2024-11-25 12:16:46.396922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.634 [2024-11-25 12:16:46.419883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.634 [2024-11-25 12:16:46.419928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:45.634 [2024-11-25 12:16:46.419941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.891 ms 00:18:45.634 [2024-11-25 12:16:46.419957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 12:16:46.442432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.635 [2024-11-25 12:16:46.442467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:45.635 [2024-11-25 12:16:46.442481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.440 ms 00:18:45.635 [2024-11-25 12:16:46.442489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 12:16:46.443053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.635 [2024-11-25 12:16:46.443067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:45.635 [2024-11-25 12:16:46.443078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:18:45.635 [2024-11-25 12:16:46.443085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 12:16:46.517436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.635 [2024-11-25 12:16:46.517481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:45.635 [2024-11-25 12:16:46.517499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.302 ms 00:18:45.635 [2024-11-25 12:16:46.517510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 
12:16:46.541689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.635 [2024-11-25 12:16:46.541730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:45.635 [2024-11-25 12:16:46.541744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.074 ms 00:18:45.635 [2024-11-25 12:16:46.541753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 12:16:46.564938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.635 [2024-11-25 12:16:46.564985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:45.635 [2024-11-25 12:16:46.565000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.140 ms 00:18:45.635 [2024-11-25 12:16:46.565008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 12:16:46.588039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.635 [2024-11-25 12:16:46.588080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:45.635 [2024-11-25 12:16:46.588095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.986 ms 00:18:45.635 [2024-11-25 12:16:46.588103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 12:16:46.588149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.635 [2024-11-25 12:16:46.588158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:45.635 [2024-11-25 12:16:46.588170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:45.635 [2024-11-25 12:16:46.588180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 12:16:46.588263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.635 [2024-11-25 12:16:46.588272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:45.635 [2024-11-25 12:16:46.588282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:45.635 [2024-11-25 12:16:46.588289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.635 [2024-11-25 12:16:46.589155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2636.394 ms, result 0 00:18:45.635 { 00:18:45.635 "name": "ftl0", 00:18:45.635 "uuid": "e175993f-d09a-4fb6-befc-a06895ef72e1" 00:18:45.635 } 00:18:45.635 12:16:46 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:45.635 12:16:46 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:45.635 12:16:46 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:45.635 12:16:46 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:45.635 12:16:46 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:45.635 12:16:46 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:45.635 12:16:46 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:45.893 12:16:46 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:46.150 [ 00:18:46.150 { 00:18:46.150 "name": "ftl0", 00:18:46.150 "aliases": [ 00:18:46.150 "e175993f-d09a-4fb6-befc-a06895ef72e1" 00:18:46.150 ], 00:18:46.150 "product_name": "FTL 
disk", 00:18:46.150 "block_size": 4096, 00:18:46.150 "num_blocks": 20971520, 00:18:46.150 "uuid": "e175993f-d09a-4fb6-befc-a06895ef72e1", 00:18:46.150 "assigned_rate_limits": { 00:18:46.150 "rw_ios_per_sec": 0, 00:18:46.150 "rw_mbytes_per_sec": 0, 00:18:46.150 "r_mbytes_per_sec": 0, 00:18:46.150 "w_mbytes_per_sec": 0 00:18:46.150 }, 00:18:46.150 "claimed": false, 00:18:46.150 "zoned": false, 00:18:46.150 "supported_io_types": { 00:18:46.150 "read": true, 00:18:46.150 "write": true, 00:18:46.150 "unmap": true, 00:18:46.150 "flush": true, 00:18:46.150 "reset": false, 00:18:46.150 "nvme_admin": false, 00:18:46.150 "nvme_io": false, 00:18:46.150 "nvme_io_md": false, 00:18:46.150 "write_zeroes": true, 00:18:46.150 "zcopy": false, 00:18:46.150 "get_zone_info": false, 00:18:46.150 "zone_management": false, 00:18:46.150 "zone_append": false, 00:18:46.150 "compare": false, 00:18:46.150 "compare_and_write": false, 00:18:46.150 "abort": false, 00:18:46.150 "seek_hole": false, 00:18:46.150 "seek_data": false, 00:18:46.150 "copy": false, 00:18:46.150 "nvme_iov_md": false 00:18:46.150 }, 00:18:46.150 "driver_specific": { 00:18:46.150 "ftl": { 00:18:46.150 "base_bdev": "fca361da-01ba-4099-aeed-b979533bacd3", 00:18:46.150 "cache": "nvc0n1p0" 00:18:46.150 } 00:18:46.150 } 00:18:46.150 } 00:18:46.150 ] 00:18:46.150 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:46.150 12:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:46.150 12:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:46.408 12:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:46.408 12:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:46.667 [2024-11-25 12:16:47.490183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.667 [2024-11-25 12:16:47.490237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:46.667 [2024-11-25 12:16:47.490251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:46.667 [2024-11-25 12:16:47.490261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.667 [2024-11-25 12:16:47.490301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.667 [2024-11-25 12:16:47.492910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.667 [2024-11-25 12:16:47.492944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:46.667 [2024-11-25 12:16:47.492965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:18:46.667 [2024-11-25 12:16:47.492974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.667 [2024-11-25 12:16:47.493420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.667 [2024-11-25 12:16:47.493440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:46.667 [2024-11-25 12:16:47.493451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:18:46.667 [2024-11-25 12:16:47.493459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.667 [2024-11-25 12:16:47.496695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.667 [2024-11-25 12:16:47.496718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:46.667 
[2024-11-25 12:16:47.496729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:18:46.667 [2024-11-25 12:16:47.496737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.667 [2024-11-25 12:16:47.502869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.667 [2024-11-25 12:16:47.502898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:46.667 [2024-11-25 12:16:47.502910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.107 ms 00:18:46.667 [2024-11-25 12:16:47.502919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.667 [2024-11-25 12:16:47.525964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.667 [2024-11-25 12:16:47.526001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:46.667 [2024-11-25 12:16:47.526014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.937 ms 00:18:46.667 [2024-11-25 12:16:47.526022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.667 [2024-11-25 12:16:47.540257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.667 [2024-11-25 12:16:47.540292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:46.667 [2024-11-25 12:16:47.540306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.170 ms 00:18:46.667 [2024-11-25 12:16:47.540316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.667 [2024-11-25 12:16:47.540497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.667 [2024-11-25 12:16:47.540513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:46.667 [2024-11-25 12:16:47.540524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:18:46.667 [2024-11-25 12:16:47.540531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.668 [2024-11-25 12:16:47.563179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.668 [2024-11-25 12:16:47.563218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:46.668 [2024-11-25 12:16:47.563232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.621 ms 00:18:46.668 [2024-11-25 12:16:47.563241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.668 [2024-11-25 12:16:47.585244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.668 [2024-11-25 12:16:47.585287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:46.668 [2024-11-25 12:16:47.585301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.954 ms 00:18:46.668 [2024-11-25 12:16:47.585309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.668 [2024-11-25 12:16:47.607245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.668 [2024-11-25 12:16:47.607279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:46.668 [2024-11-25 12:16:47.607291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.890 ms 00:18:46.668 [2024-11-25 12:16:47.607299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.668 [2024-11-25 12:16:47.629504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.668 [2024-11-25 12:16:47.629537] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:46.668 [2024-11-25 12:16:47.629550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.110 ms 00:18:46.668 [2024-11-25 12:16:47.629557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.668 [2024-11-25 12:16:47.629600] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:46.668 [2024-11-25 12:16:47.629615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 
[2024-11-25 12:16:47.629805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.629996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:46.668 [2024-11-25 12:16:47.630030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:46.668 [2024-11-25 12:16:47.630267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:46.669 [2024-11-25 12:16:47.630508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:46.669 [2024-11-25 12:16:47.630517] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e175993f-d09a-4fb6-befc-a06895ef72e1 00:18:46.669 [2024-11-25 12:16:47.630525] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:46.669 [2024-11-25 12:16:47.630535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:46.669 [2024-11-25 12:16:47.630542] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:46.669 [2024-11-25 12:16:47.630554] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:46.669 [2024-11-25 12:16:47.630560] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:46.669 [2024-11-25 12:16:47.630569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:46.669 [2024-11-25 12:16:47.630577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:46.669 [2024-11-25 12:16:47.630585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:46.669 [2024-11-25 12:16:47.630591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:46.669 [2024-11-25 12:16:47.630600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.669 [2024-11-25 12:16:47.630607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:46.669 [2024-11-25 12:16:47.630617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:18:46.669 [2024-11-25 12:16:47.630625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.669 [2024-11-25 12:16:47.642963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.669 [2024-11-25 12:16:47.642999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:46.669 [2024-11-25 12:16:47.643012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.299 ms 00:18:46.669 [2024-11-25 12:16:47.643020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.669 [2024-11-25 12:16:47.643366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.669 [2024-11-25 12:16:47.643385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:46.669 [2024-11-25 12:16:47.643396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:18:46.669 [2024-11-25 12:16:47.643404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.669 [2024-11-25 12:16:47.686673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.669 [2024-11-25 12:16:47.686724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.669 [2024-11-25 12:16:47.686736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.669 [2024-11-25 12:16:47.686745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
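Two readings of the statistics dump above are expected rather than anomalous: all 100 bands report `0 / 261120 wr_cnt: 0 state: free`, and WAF prints as `inf`. No user I/O ran between startup and this unload, so the 960 total writes are purely internal metadata traffic, and the write-amplification ratio divides by zero user writes:

    # WAF = total media writes / user writes; zero user writes -> "inf"
    awk -v total=960 -v user=0 'BEGIN { print (user ? total / user : "inf") }'

The Rollback records that follow mirror the startup actions in reverse order and each report a duration of 0.000 ms, since they only release what the matching startup step allocated.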
00:18:46.669 [2024-11-25 12:16:47.686812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.669 [2024-11-25 12:16:47.686820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.669 [2024-11-25 12:16:47.686829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.669 [2024-11-25 12:16:47.686837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.669 [2024-11-25 12:16:47.686932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.669 [2024-11-25 12:16:47.686942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.669 [2024-11-25 12:16:47.686965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.669 [2024-11-25 12:16:47.686972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.669 [2024-11-25 12:16:47.686999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.669 [2024-11-25 12:16:47.687007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.669 [2024-11-25 12:16:47.687017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.669 [2024-11-25 12:16:47.687024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.767265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.927 [2024-11-25 12:16:47.767319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.927 [2024-11-25 12:16:47.767331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.927 [2024-11-25 12:16:47.767338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.829173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.927 [2024-11-25 12:16:47.829221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.927 [2024-11-25 12:16:47.829233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.927 [2024-11-25 12:16:47.829242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.829339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.927 [2024-11-25 12:16:47.829349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.927 [2024-11-25 12:16:47.829358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.927 [2024-11-25 12:16:47.829368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.829431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.927 [2024-11-25 12:16:47.829443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.927 [2024-11-25 12:16:47.829453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.927 [2024-11-25 12:16:47.829464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.829561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.927 [2024-11-25 12:16:47.829576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.927 [2024-11-25 12:16:47.829586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.927 [2024-11-25 
12:16:47.829592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.829648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.927 [2024-11-25 12:16:47.829657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:46.927 [2024-11-25 12:16:47.829666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.927 [2024-11-25 12:16:47.829674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.829717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.927 [2024-11-25 12:16:47.829729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.927 [2024-11-25 12:16:47.829738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.927 [2024-11-25 12:16:47.829749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.829799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.927 [2024-11-25 12:16:47.829814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.927 [2024-11-25 12:16:47.829824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.927 [2024-11-25 12:16:47.829831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.927 [2024-11-25 12:16:47.830008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.788 ms, result 0 00:18:46.927 true 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75371 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75371 ']' 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75371 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75371 00:18:46.927 killing process with pid 75371 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75371' 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75371 00:18:46.927 12:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75371 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:59.149 12:16:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
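The fio_plugin trace above shows the harness locating the ASAN runtime that the spdk_bdev ioengine links against, so the sanitizer library can be preloaded ahead of the plugin before fio starts. A condensed sketch of that detection logic (illustrative only, using paths from this run; not a verbatim copy of autotest_common.sh; the fio output follows below):

# Find the sanitizer runtime the fio plugin is linked against and preload
# it before the plugin itself, mirroring the xtrace above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
fio_config=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# asan_lib resolves to /usr/lib64/libasan.so.8 in this run; when it is empty
# the build is not ASAN-instrumented and the plugin is preloaded alone.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$fio_config"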
00:18:59.149 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:59.149 fio-3.35 00:18:59.149 Starting 1 thread 00:19:02.436 00:19:02.436 test: (groupid=0, jobs=1): err= 0: pid=75550: Mon Nov 25 12:17:03 2024 00:19:02.436 read: IOPS=1328, BW=88.2MiB/s (92.5MB/s)(255MiB/2885msec) 00:19:02.436 slat (nsec): min=4609, max=90207, avg=6019.32, stdev=2595.03 00:19:02.436 clat (usec): min=238, max=789, avg=337.23, stdev=36.98 00:19:02.436 lat (usec): min=244, max=795, avg=343.25, stdev=37.87 00:19:02.436 clat percentiles (usec): 00:19:02.436 | 1.00th=[ 260], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 318], 00:19:02.436 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 334], 60.00th=[ 338], 00:19:02.436 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 408], 00:19:02.436 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 570], 99.95th=[ 725], 00:19:02.436 | 99.99th=[ 791] 00:19:02.436 write: IOPS=1337, BW=88.8MiB/s (93.2MB/s)(256MiB/2882msec); 0 zone resets 00:19:02.436 slat (nsec): min=18425, max=82283, avg=26417.33, stdev=4564.16 00:19:02.436 clat (usec): min=281, max=869, avg=366.37, stdev=47.76 00:19:02.436 lat (usec): min=307, max=896, avg=392.79, stdev=48.18 00:19:02.436 clat percentiles (usec): 00:19:02.436 | 1.00th=[ 322], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 347], 00:19:02.436 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 355], 60.00th=[ 359], 00:19:02.436 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 412], 95.00th=[ 441], 00:19:02.436 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 775], 99.95th=[ 857], 00:19:02.436 | 99.99th=[ 873] 00:19:02.436 bw ( KiB/s): min=87584, max=93704, per=100.00%, avg=91011.20, stdev=2607.87, samples=5 00:19:02.436 iops : min= 1288, max= 1378, avg=1338.40, stdev=38.35, samples=5 00:19:02.436 lat (usec) : 250=0.27%, 500=98.11%, 750=1.55%, 1000=0.07%
00:19:02.436 cpu : usr=98.96%, sys=0.28%, ctx=4, majf=0, minf=1169 00:19:02.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.436 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.436 00:19:02.436 Run status group 0 (all jobs): 00:19:02.436 READ: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=255MiB (267MB), run=2885-2885msec 00:19:02.436 WRITE: bw=88.8MiB/s (93.2MB/s), 88.8MiB/s-88.8MiB/s (93.2MB/s-93.2MB/s), io=256MiB (269MB), run=2882-2882msec 00:19:03.809 ----------------------------------------------------- 00:19:03.809 Suppressions used: 00:19:03.809 count bytes template 00:19:03.809 1 5 /usr/src/fio/parse.c 00:19:03.809 1 8 libtcmalloc_minimal.so 00:19:03.809 1 904 libcrypto.so 00:19:03.809 ----------------------------------------------------- 00:19:03.809 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:03.809 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:03.810 12:17:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:03.810 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:03.810 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:03.810 fio-3.35 00:19:03.810 Starting 2 threads 00:19:30.373 00:19:30.373 first_half: (groupid=0, jobs=1): err= 0: pid=75645: Mon Nov 25 12:17:27 2024 00:19:30.373 read: IOPS=3024, BW=11.8MiB/s (12.4MB/s)(256MiB/21644msec) 00:19:30.373 slat (nsec): min=2996, max=31565, avg=5317.36, stdev=1198.16 00:19:30.373 clat (usec): min=519, max=280361, avg=35990.51, stdev=22111.14 00:19:30.373 lat (usec): min=523, max=280366, avg=35995.82, stdev=22111.29 00:19:30.373 clat percentiles (msec): 00:19:30.373 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:19:30.373 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:30.373 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 68], 00:19:30.373 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 199], 99.95th=[ 241], 00:19:30.373 | 99.99th=[ 275] 00:19:30.373 write: IOPS=3031, BW=11.8MiB/s (12.4MB/s)(256MiB/21619msec); 0 zone resets 00:19:30.373 slat (usec): min=3, max=216, avg= 6.70, stdev= 3.04 00:19:30.373 clat (usec): min=182, max=41089, avg=6291.75, stdev=6456.80 00:19:30.373 lat (usec): min=200, max=41096, avg=6298.45, stdev=6456.89 00:19:30.373 clat percentiles (usec): 00:19:30.373 | 1.00th=[ 701], 5.00th=[ 979], 10.00th=[ 1221], 20.00th=[ 2442], 00:19:30.373 | 30.00th=[ 3294], 40.00th=[ 4113], 50.00th=[ 4883], 60.00th=[ 5473], 00:19:30.373 | 70.00th=[ 6128], 80.00th=[ 7635], 90.00th=[10945], 95.00th=[22938], 00:19:30.373 | 99.00th=[33817], 99.50th=[36439], 99.90th=[39060], 99.95th=[40109], 00:19:30.373 | 99.99th=[40633] 00:19:30.373 bw ( KiB/s): min= 144, max=48672, per=97.61%, avg=23671.64, stdev=16866.71, samples=22 00:19:30.373 iops : min= 36, max=12168, avg=5917.91, stdev=4216.68, samples=22 00:19:30.373 lat (usec) : 250=0.01%, 500=0.07%, 750=0.73%, 1000=1.91% 00:19:30.373 lat (msec) : 2=6.00%, 4=10.62%, 10=25.35%, 20=4.09%, 50=48.02% 00:19:30.373 lat (msec) : 100=1.54%, 250=1.65%, 500=0.02% 00:19:30.373 cpu : usr=99.18%, sys=0.13%, ctx=27, majf=0, minf=5552 00:19:30.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:30.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.373 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.373 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.373 second_half: (groupid=0, jobs=1): err= 0: pid=75646: Mon Nov 25 12:17:27 2024 00:19:30.373 read: IOPS=3048, BW=11.9MiB/s (12.5MB/s)(256MiB/21483msec) 00:19:30.373 slat (nsec): min=3092, max=57935, avg=4591.82, stdev=1237.80 00:19:30.373 clat (msec): min=8, max=188, avg=36.46, stdev=19.58 00:19:30.373 lat (msec): min=8, max=188, avg=36.46, stdev=19.58 00:19:30.373 clat percentiles (msec): 00:19:30.373 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:19:30.373 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:30.373 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 63], 00:19:30.373 | 99.00th=[ 146], 
99.50th=[ 153], 99.90th=[ 171], 99.95th=[ 178], 00:19:30.373 | 99.99th=[ 186] 00:19:30.373 write: IOPS=3067, BW=12.0MiB/s (12.6MB/s)(256MiB/21363msec); 0 zone resets 00:19:30.373 slat (usec): min=3, max=269, avg= 6.16, stdev= 3.63 00:19:30.373 clat (usec): min=344, max=32672, avg=5511.88, stdev=3652.56 00:19:30.373 lat (usec): min=351, max=32677, avg=5518.04, stdev=3652.82 00:19:30.373 clat percentiles (usec): 00:19:30.373 | 1.00th=[ 799], 5.00th=[ 1467], 10.00th=[ 2376], 20.00th=[ 2966], 00:19:30.373 | 30.00th=[ 3654], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 5342], 00:19:30.373 | 70.00th=[ 5866], 80.00th=[ 6915], 90.00th=[10290], 95.00th=[11338], 00:19:30.373 | 99.00th=[20055], 99.50th=[27919], 99.90th=[31851], 99.95th=[32113], 00:19:30.373 | 99.99th=[32375] 00:19:30.373 bw ( KiB/s): min= 1464, max=47576, per=100.00%, avg=24794.29, stdev=16409.48, samples=21 00:19:30.373 iops : min= 366, max=11894, avg=6198.57, stdev=4102.37, samples=21 00:19:30.373 lat (usec) : 500=0.04%, 750=0.31%, 1000=0.71% 00:19:30.373 lat (msec) : 2=2.75%, 4=14.14%, 10=26.43%, 20=5.20%, 50=47.01% 00:19:30.373 lat (msec) : 100=1.85%, 250=1.57% 00:19:30.373 cpu : usr=99.29%, sys=0.13%, ctx=31, majf=0, minf=5565 00:19:30.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:30.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.373 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.373 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.373 00:19:30.373 Run status group 0 (all jobs): 00:19:30.373 READ: bw=23.6MiB/s (24.8MB/s), 11.8MiB/s-11.9MiB/s (12.4MB/s-12.5MB/s), io=512MiB (536MB), run=21483-21644msec 00:19:30.373 WRITE: bw=23.7MiB/s (24.8MB/s), 11.8MiB/s-12.0MiB/s (12.4MB/s-12.6MB/s), io=512MiB (537MB), run=21363-21619msec 00:19:30.373 ----------------------------------------------------- 00:19:30.373 Suppressions used: 00:19:30.373 count bytes template 00:19:30.373 2 10 /usr/src/fio/parse.c 00:19:30.373 3 288 /usr/src/fio/iolog.c 00:19:30.373 1 8 libtcmalloc_minimal.so 00:19:30.373 1 904 libcrypto.so 00:19:30.373 ----------------------------------------------------- 00:19:30.373 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:30.373 12:17:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:30.373 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:30.373 fio-3.35 00:19:30.373 Starting 1 thread 00:19:42.574 00:19:42.574 test: (groupid=0, jobs=1): err= 0: pid=75932: Mon Nov 25 12:17:42 2024 00:19:42.574 read: IOPS=8483, BW=33.1MiB/s (34.7MB/s)(255MiB/7686msec) 00:19:42.574 slat (nsec): min=3005, max=25079, avg=3501.87, stdev=659.76 00:19:42.574 clat (usec): min=496, max=30947, avg=15080.82, stdev=1999.15 00:19:42.574 lat (usec): min=500, max=30950, avg=15084.32, stdev=1999.17 00:19:42.574 clat percentiles (usec): 00:19:42.574 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13698], 20.00th=[13829], 00:19:42.574 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14746], 60.00th=[15139], 00:19:42.574 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16450], 95.00th=[19268], 00:19:42.574 | 99.00th=[23725], 99.50th=[25035], 99.90th=[28443], 99.95th=[29492], 00:19:42.574 | 99.99th=[30278] 00:19:42.574 write: IOPS=16.7k, BW=65.4MiB/s (68.6MB/s)(256MiB/3913msec); 0 zone resets 00:19:42.574 slat (usec): min=4, max=723, avg= 6.36, stdev= 3.67 00:19:42.574 clat (usec): min=441, max=48363, avg=7600.96, stdev=9619.07 00:19:42.574 lat (usec): min=447, max=48369, avg=7607.32, stdev=9619.03 00:19:42.574 clat percentiles (usec): 00:19:42.574 | 1.00th=[ 619], 5.00th=[ 725], 10.00th=[ 824], 20.00th=[ 955], 00:19:42.574 | 30.00th=[ 1090], 40.00th=[ 1467], 50.00th=[ 4817], 60.00th=[ 5604], 00:19:42.574 | 70.00th=[ 6718], 80.00th=[ 8291], 90.00th=[27657], 95.00th=[30016], 00:19:42.574 | 99.00th=[33162], 99.50th=[35390], 99.90th=[39584], 99.95th=[40109], 00:19:42.574 | 99.99th=[46924] 00:19:42.574 bw ( KiB/s): min=46832, max=90808, per=97.83%, avg=65536.00, stdev=15088.38, samples=8 00:19:42.574 iops : min=11708, max=22702, avg=16384.00, stdev=3772.09, samples=8 00:19:42.574 lat (usec) : 500=0.01%, 750=3.04%, 1000=8.81% 00:19:42.574 lat (msec) : 2=8.78%, 4=1.11%, 10=20.14%, 20=48.24%, 50=9.87% 00:19:42.574 cpu : usr=99.15%, sys=0.17%, ctx=16, majf=0, minf=5565 00:19:42.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:19:42.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.574 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:42.574 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:42.575 00:19:42.575 Run status group 0 (all jobs): 00:19:42.575 READ: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=255MiB (267MB), run=7686-7686msec 00:19:42.575 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=256MiB (268MB), run=3913-3913msec 00:19:43.139 ----------------------------------------------------- 00:19:43.139 Suppressions used: 00:19:43.139 count bytes template 00:19:43.139 1 5 /usr/src/fio/parse.c 00:19:43.139 2 192 /usr/src/fio/iolog.c 00:19:43.139 1 8 libtcmalloc_minimal.so 00:19:43.139 1 904 libcrypto.so 00:19:43.139 ----------------------------------------------------- 00:19:43.139 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:43.139 Remove shared memory files 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57170 /dev/shm/spdk_tgt_trace.pid74293 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:43.139 00:19:43.139 real 1m3.783s 00:19:43.139 user 2m24.569s 00:19:43.139 sys 0m2.595s 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.139 12:17:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:43.139 ************************************ 00:19:43.139 END TEST ftl_fio_basic 00:19:43.139 ************************************ 00:19:43.139 12:17:44 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:43.139 12:17:44 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:43.139 12:17:44 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.139 12:17:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:43.139 ************************************ 00:19:43.139 START TEST ftl_bdevperf 00:19:43.139 ************************************ 00:19:43.139 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:43.139 * Looking for test storage... 
00:19:43.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.139 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:43.139 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:43.139 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.398 --rc genhtml_branch_coverage=1 00:19:43.398 --rc genhtml_function_coverage=1 00:19:43.398 --rc genhtml_legend=1 00:19:43.398 --rc geninfo_all_blocks=1 00:19:43.398 --rc geninfo_unexecuted_blocks=1 00:19:43.398 00:19:43.398 ' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.398 --rc genhtml_branch_coverage=1 00:19:43.398 
--rc genhtml_function_coverage=1 00:19:43.398 --rc genhtml_legend=1 00:19:43.398 --rc geninfo_all_blocks=1 00:19:43.398 --rc geninfo_unexecuted_blocks=1 00:19:43.398 00:19:43.398 ' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.398 --rc genhtml_branch_coverage=1 00:19:43.398 --rc genhtml_function_coverage=1 00:19:43.398 --rc genhtml_legend=1 00:19:43.398 --rc geninfo_all_blocks=1 00:19:43.398 --rc geninfo_unexecuted_blocks=1 00:19:43.398 00:19:43.398 ' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.398 --rc genhtml_branch_coverage=1 00:19:43.398 --rc genhtml_function_coverage=1 00:19:43.398 --rc genhtml_legend=1 00:19:43.398 --rc geninfo_all_blocks=1 00:19:43.398 --rc geninfo_unexecuted_blocks=1 00:19:43.398 00:19:43.398 ' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76154 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76154 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76154 ']' 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.398 12:17:44 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:43.398 [2024-11-25 12:17:44.336886] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
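At this point waitforlisten blocks until the freshly launched bdevperf (pid 76154 above) answers on /var/tmp/spdk.sock before any RPCs are issued. A simplified stand-in for that wait loop (illustrative sketch only; the real helper in autotest_common.sh does considerably more error handling; the app's startup output continues below):

# Poll for the target's UNIX-domain RPC socket, bailing out early if the
# process dies before it ever starts listening.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited prematurely
        [ -S "$sock" ] && return 0               # socket is up, RPCs can flow
        sleep 0.1
    done
    return 1                                     # timed out
}
wait_for_rpc_sock 76154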
00:19:43.399 [2024-11-25 12:17:44.337227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76154 ] 00:19:43.656 [2024-11-25 12:17:44.493167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.656 [2024-11-25 12:17:44.594245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.222 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.222 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:44.222 12:17:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:44.222 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:44.222 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:44.222 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:44.222 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:44.222 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:44.482 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:44.482 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:44.482 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:44.482 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:44.482 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:44.482 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:44.482 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:44.482 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:44.740 { 00:19:44.740 "name": "nvme0n1", 00:19:44.740 "aliases": [ 00:19:44.740 "fc6d86b4-0921-458c-859c-c3b3b580a874" 00:19:44.740 ], 00:19:44.740 "product_name": "NVMe disk", 00:19:44.740 "block_size": 4096, 00:19:44.740 "num_blocks": 1310720, 00:19:44.740 "uuid": "fc6d86b4-0921-458c-859c-c3b3b580a874", 00:19:44.740 "numa_id": -1, 00:19:44.740 "assigned_rate_limits": { 00:19:44.740 "rw_ios_per_sec": 0, 00:19:44.740 "rw_mbytes_per_sec": 0, 00:19:44.740 "r_mbytes_per_sec": 0, 00:19:44.740 "w_mbytes_per_sec": 0 00:19:44.740 }, 00:19:44.740 "claimed": true, 00:19:44.740 "claim_type": "read_many_write_one", 00:19:44.740 "zoned": false, 00:19:44.740 "supported_io_types": { 00:19:44.740 "read": true, 00:19:44.740 "write": true, 00:19:44.740 "unmap": true, 00:19:44.740 "flush": true, 00:19:44.740 "reset": true, 00:19:44.740 "nvme_admin": true, 00:19:44.740 "nvme_io": true, 00:19:44.740 "nvme_io_md": false, 00:19:44.740 "write_zeroes": true, 00:19:44.740 "zcopy": false, 00:19:44.740 "get_zone_info": false, 00:19:44.740 "zone_management": false, 00:19:44.740 "zone_append": false, 00:19:44.740 "compare": true, 00:19:44.740 "compare_and_write": false, 00:19:44.740 "abort": true, 00:19:44.740 "seek_hole": false, 00:19:44.740 "seek_data": false, 00:19:44.740 "copy": true, 00:19:44.740 "nvme_iov_md": false 00:19:44.740 }, 00:19:44.740 "driver_specific": { 00:19:44.740 
"nvme": [ 00:19:44.740 { 00:19:44.740 "pci_address": "0000:00:11.0", 00:19:44.740 "trid": { 00:19:44.740 "trtype": "PCIe", 00:19:44.740 "traddr": "0000:00:11.0" 00:19:44.740 }, 00:19:44.740 "ctrlr_data": { 00:19:44.740 "cntlid": 0, 00:19:44.740 "vendor_id": "0x1b36", 00:19:44.740 "model_number": "QEMU NVMe Ctrl", 00:19:44.740 "serial_number": "12341", 00:19:44.740 "firmware_revision": "8.0.0", 00:19:44.740 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:44.740 "oacs": { 00:19:44.740 "security": 0, 00:19:44.740 "format": 1, 00:19:44.740 "firmware": 0, 00:19:44.740 "ns_manage": 1 00:19:44.740 }, 00:19:44.740 "multi_ctrlr": false, 00:19:44.740 "ana_reporting": false 00:19:44.740 }, 00:19:44.740 "vs": { 00:19:44.740 "nvme_version": "1.4" 00:19:44.740 }, 00:19:44.740 "ns_data": { 00:19:44.740 "id": 1, 00:19:44.740 "can_share": false 00:19:44.740 } 00:19:44.740 } 00:19:44.740 ], 00:19:44.740 "mp_policy": "active_passive" 00:19:44.740 } 00:19:44.740 } 00:19:44.740 ]' 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:44.740 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:44.998 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0d086759-b8ef-4b30-a42f-8bb0c85dbc53 00:19:44.998 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:44.999 12:17:45 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d086759-b8ef-4b30-a42f-8bb0c85dbc53 00:19:45.257 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:45.257 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f88924a3-f39e-4b0f-888c-5ef9d28838a6 00:19:45.257 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f88924a3-f39e-4b0f-888c-5ef9d28838a6 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:45.515 12:17:46 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:45.515 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:45.776 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:45.776 { 00:19:45.776 "name": "ba7b514d-d03c-4a3f-9867-04a26ea52cf3", 00:19:45.776 "aliases": [ 00:19:45.776 "lvs/nvme0n1p0" 00:19:45.776 ], 00:19:45.776 "product_name": "Logical Volume", 00:19:45.776 "block_size": 4096, 00:19:45.776 "num_blocks": 26476544, 00:19:45.776 "uuid": "ba7b514d-d03c-4a3f-9867-04a26ea52cf3", 00:19:45.776 "assigned_rate_limits": { 00:19:45.776 "rw_ios_per_sec": 0, 00:19:45.776 "rw_mbytes_per_sec": 0, 00:19:45.776 "r_mbytes_per_sec": 0, 00:19:45.776 "w_mbytes_per_sec": 0 00:19:45.776 }, 00:19:45.776 "claimed": false, 00:19:45.776 "zoned": false, 00:19:45.776 "supported_io_types": { 00:19:45.776 "read": true, 00:19:45.776 "write": true, 00:19:45.776 "unmap": true, 00:19:45.776 "flush": false, 00:19:45.776 "reset": true, 00:19:45.776 "nvme_admin": false, 00:19:45.776 "nvme_io": false, 00:19:45.776 "nvme_io_md": false, 00:19:45.776 "write_zeroes": true, 00:19:45.776 "zcopy": false, 00:19:45.776 "get_zone_info": false, 00:19:45.776 "zone_management": false, 00:19:45.776 "zone_append": false, 00:19:45.776 "compare": false, 00:19:45.776 "compare_and_write": false, 00:19:45.776 "abort": false, 00:19:45.776 "seek_hole": true, 00:19:45.776 "seek_data": true, 00:19:45.776 "copy": false, 00:19:45.776 "nvme_iov_md": false 00:19:45.776 }, 00:19:45.776 "driver_specific": { 00:19:45.776 "lvol": { 00:19:45.776 "lvol_store_uuid": "f88924a3-f39e-4b0f-888c-5ef9d28838a6", 00:19:45.776 "base_bdev": "nvme0n1", 00:19:45.776 "thin_provision": true, 00:19:45.776 "num_allocated_clusters": 0, 00:19:45.776 "snapshot": false, 00:19:45.776 "clone": false, 00:19:45.776 "esnap_clone": false 00:19:45.776 } 00:19:45.776 } 00:19:45.776 } 00:19:45.777 ]' 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:45.777 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:46.050 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:46.050 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:46.051 12:17:46 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:46.051 12:17:46 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:46.051 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.051 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:46.051 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:46.051 12:17:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:46.051 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.051 { 00:19:46.051 "name": "ba7b514d-d03c-4a3f-9867-04a26ea52cf3", 00:19:46.051 "aliases": [ 00:19:46.051 "lvs/nvme0n1p0" 00:19:46.051 ], 00:19:46.051 "product_name": "Logical Volume", 00:19:46.051 "block_size": 4096, 00:19:46.051 "num_blocks": 26476544, 00:19:46.051 "uuid": "ba7b514d-d03c-4a3f-9867-04a26ea52cf3", 00:19:46.051 "assigned_rate_limits": { 00:19:46.051 "rw_ios_per_sec": 0, 00:19:46.051 "rw_mbytes_per_sec": 0, 00:19:46.051 "r_mbytes_per_sec": 0, 00:19:46.051 "w_mbytes_per_sec": 0 00:19:46.051 }, 00:19:46.051 "claimed": false, 00:19:46.051 "zoned": false, 00:19:46.051 "supported_io_types": { 00:19:46.051 "read": true, 00:19:46.051 "write": true, 00:19:46.051 "unmap": true, 00:19:46.051 "flush": false, 00:19:46.051 "reset": true, 00:19:46.051 "nvme_admin": false, 00:19:46.051 "nvme_io": false, 00:19:46.051 "nvme_io_md": false, 00:19:46.051 "write_zeroes": true, 00:19:46.051 "zcopy": false, 00:19:46.051 "get_zone_info": false, 00:19:46.051 "zone_management": false, 00:19:46.051 "zone_append": false, 00:19:46.051 "compare": false, 00:19:46.051 "compare_and_write": false, 00:19:46.051 "abort": false, 00:19:46.051 "seek_hole": true, 00:19:46.051 "seek_data": true, 00:19:46.051 "copy": false, 00:19:46.051 "nvme_iov_md": false 00:19:46.051 }, 00:19:46.051 "driver_specific": { 00:19:46.051 "lvol": { 00:19:46.051 "lvol_store_uuid": "f88924a3-f39e-4b0f-888c-5ef9d28838a6", 00:19:46.051 "base_bdev": "nvme0n1", 00:19:46.051 "thin_provision": true, 00:19:46.051 "num_allocated_clusters": 0, 00:19:46.051 "snapshot": false, 00:19:46.051 "clone": false, 00:19:46.051 "esnap_clone": false 00:19:46.051 } 00:19:46.051 } 00:19:46.051 } 00:19:46.051 ]' 00:19:46.051 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.309 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.309 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.309 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.309 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.309 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.309 12:17:47 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:46.309 12:17:47 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:46.566 12:17:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:46.566 12:17:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:46.566 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:46.566 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.566 12:17:47 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:46.566 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:46.566 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba7b514d-d03c-4a3f-9867-04a26ea52cf3 00:19:46.566 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.566 { 00:19:46.566 "name": "ba7b514d-d03c-4a3f-9867-04a26ea52cf3", 00:19:46.566 "aliases": [ 00:19:46.566 "lvs/nvme0n1p0" 00:19:46.566 ], 00:19:46.566 "product_name": "Logical Volume", 00:19:46.566 "block_size": 4096, 00:19:46.566 "num_blocks": 26476544, 00:19:46.566 "uuid": "ba7b514d-d03c-4a3f-9867-04a26ea52cf3", 00:19:46.566 "assigned_rate_limits": { 00:19:46.566 "rw_ios_per_sec": 0, 00:19:46.566 "rw_mbytes_per_sec": 0, 00:19:46.566 "r_mbytes_per_sec": 0, 00:19:46.566 "w_mbytes_per_sec": 0 00:19:46.566 }, 00:19:46.566 "claimed": false, 00:19:46.566 "zoned": false, 00:19:46.566 "supported_io_types": { 00:19:46.566 "read": true, 00:19:46.566 "write": true, 00:19:46.566 "unmap": true, 00:19:46.566 "flush": false, 00:19:46.566 "reset": true, 00:19:46.566 "nvme_admin": false, 00:19:46.567 "nvme_io": false, 00:19:46.567 "nvme_io_md": false, 00:19:46.567 "write_zeroes": true, 00:19:46.567 "zcopy": false, 00:19:46.567 "get_zone_info": false, 00:19:46.567 "zone_management": false, 00:19:46.567 "zone_append": false, 00:19:46.567 "compare": false, 00:19:46.567 "compare_and_write": false, 00:19:46.567 "abort": false, 00:19:46.567 "seek_hole": true, 00:19:46.567 "seek_data": true, 00:19:46.567 "copy": false, 00:19:46.567 "nvme_iov_md": false 00:19:46.567 }, 00:19:46.567 "driver_specific": { 00:19:46.567 "lvol": { 00:19:46.567 "lvol_store_uuid": "f88924a3-f39e-4b0f-888c-5ef9d28838a6", 00:19:46.567 "base_bdev": "nvme0n1", 00:19:46.567 "thin_provision": true, 00:19:46.567 "num_allocated_clusters": 0, 00:19:46.567 "snapshot": false, 00:19:46.567 "clone": false, 00:19:46.567 "esnap_clone": false 00:19:46.567 } 00:19:46.567 } 00:19:46.567 } 00:19:46.567 ]' 00:19:46.567 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.567 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.567 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.826 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.826 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.826 12:17:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.826 12:17:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:46.826 12:17:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ba7b514d-d03c-4a3f-9867-04a26ea52cf3 -c nvc0n1p0 --l2p_dram_limit 20 00:19:46.826 [2024-11-25 12:17:47.846029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.846235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.826 [2024-11-25 12:17:47.846256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:46.826 [2024-11-25 12:17:47.846266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.846325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.846339] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.826 [2024-11-25 12:17:47.846347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:46.826 [2024-11-25 12:17:47.846356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.846384] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.826 [2024-11-25 12:17:47.847116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.826 [2024-11-25 12:17:47.847145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.847155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.826 [2024-11-25 12:17:47.847164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:19:46.826 [2024-11-25 12:17:47.847174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.847254] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ca59695f-3901-438d-92d8-9f2a604383ee 00:19:46.826 [2024-11-25 12:17:47.848335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.848367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:46.826 [2024-11-25 12:17:47.848379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:46.826 [2024-11-25 12:17:47.848389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.853767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.853904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.826 [2024-11-25 12:17:47.853923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.341 ms 00:19:46.826 [2024-11-25 12:17:47.853931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.854039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.854050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.826 [2024-11-25 12:17:47.854063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:46.826 [2024-11-25 12:17:47.854070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.854113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.854122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.826 [2024-11-25 12:17:47.854132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:46.826 [2024-11-25 12:17:47.854139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.854160] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.826 [2024-11-25 12:17:47.857868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.857977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.826 [2024-11-25 12:17:47.858032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.715 ms 00:19:46.826 [2024-11-25 12:17:47.858062] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.858111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.858177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.826 [2024-11-25 12:17:47.858201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:46.826 [2024-11-25 12:17:47.858223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.826 [2024-11-25 12:17:47.858305] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:46.826 [2024-11-25 12:17:47.858469] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.826 [2024-11-25 12:17:47.858541] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.826 [2024-11-25 12:17:47.858578] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:46.826 [2024-11-25 12:17:47.858609] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.826 [2024-11-25 12:17:47.858645] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.826 [2024-11-25 12:17:47.858795] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:46.826 [2024-11-25 12:17:47.858819] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.826 [2024-11-25 12:17:47.858837] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.826 [2024-11-25 12:17:47.858859] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.826 [2024-11-25 12:17:47.858879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.826 [2024-11-25 12:17:47.858941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.826 [2024-11-25 12:17:47.858976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:19:46.826 [2024-11-25 12:17:47.858998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.827 [2024-11-25 12:17:47.859094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.827 [2024-11-25 12:17:47.859123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.827 [2024-11-25 12:17:47.859144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:46.827 [2024-11-25 12:17:47.859166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.827 [2024-11-25 12:17:47.859295] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.827 [2024-11-25 12:17:47.859350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.827 [2024-11-25 12:17:47.859375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.827 [2024-11-25 12:17:47.859416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.827 [2024-11-25 12:17:47.859484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.827 [2024-11-25 12:17:47.859508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.827 [2024-11-25 12:17:47.859549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:46.827
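The l2p region size just printed follows directly from the layout values a few records earlier: 20971520 L2P entries at an address size of 4 bytes is exactly 80 MiB. A quick check of that arithmetic (the region dump continues below):

# 20971520 entries * 4 B per L2P address = 83886080 B = 80 MiB, matching
# the "Region l2p ... blocks: 80.00 MiB" lines above.
echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints: 80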
[2024-11-25 12:17:47.859574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.827 [2024-11-25 12:17:47.859593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:46.827 [2024-11-25 12:17:47.859641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.827 [2024-11-25 12:17:47.859662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.827 [2024-11-25 12:17:47.859682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:46.827 [2024-11-25 12:17:47.859723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.827 [2024-11-25 12:17:47.859753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.827 [2024-11-25 12:17:47.859771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:46.827 [2024-11-25 12:17:47.859794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.827 [2024-11-25 12:17:47.859845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.827 [2024-11-25 12:17:47.859869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:46.827 [2024-11-25 12:17:47.859888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.827 [2024-11-25 12:17:47.859910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.827 [2024-11-25 12:17:47.859928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:46.827 [2024-11-25 12:17:47.859986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.827 [2024-11-25 12:17:47.860009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.827 [2024-11-25 12:17:47.860059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:46.827 [2024-11-25 12:17:47.860101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.827 [2024-11-25 12:17:47.860124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.827 [2024-11-25 12:17:47.860143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:46.827 [2024-11-25 12:17:47.860162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.827 [2024-11-25 12:17:47.860180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.827 [2024-11-25 12:17:47.860276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:46.827 [2024-11-25 12:17:47.860299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.827 [2024-11-25 12:17:47.860321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.827 [2024-11-25 12:17:47.860376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:46.827 [2024-11-25 12:17:47.860400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.827 [2024-11-25 12:17:47.860419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.827 [2024-11-25 12:17:47.860459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:46.827 [2024-11-25 12:17:47.860480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.827 [2024-11-25 12:17:47.860500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.827 [2024-11-25 12:17:47.860584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:46.827 [2024-11-25 12:17:47.860608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.827 [2024-11-25 12:17:47.860627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.827 [2024-11-25 12:17:47.860646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:46.827 [2024-11-25 12:17:47.860745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.827 [2024-11-25 12:17:47.860769] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.827 [2024-11-25 12:17:47.860788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.827 [2024-11-25 12:17:47.860809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.827 [2024-11-25 12:17:47.860854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.827 [2024-11-25 12:17:47.860886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.827 [2024-11-25 12:17:47.860906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.827 [2024-11-25 12:17:47.860925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.827 [2024-11-25 12:17:47.860975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.827 [2024-11-25 12:17:47.861046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.827 [2024-11-25 12:17:47.861093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.827 [2024-11-25 12:17:47.861121] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.827 [2024-11-25 12:17:47.861184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.827 [2024-11-25 12:17:47.861220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:46.827 [2024-11-25 12:17:47.861276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:46.827 [2024-11-25 12:17:47.861309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:46.827 [2024-11-25 12:17:47.861429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:46.827 [2024-11-25 12:17:47.861462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:46.827 [2024-11-25 12:17:47.861521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:46.827 [2024-11-25 12:17:47.861555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:46.827 [2024-11-25 12:17:47.861584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:46.827 [2024-11-25 12:17:47.861643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:46.827 [2024-11-25 12:17:47.861716] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:46.827 [2024-11-25 12:17:47.861772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:46.827 [2024-11-25 12:17:47.861804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:46.827 [2024-11-25 12:17:47.861909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:46.827 [2024-11-25 12:17:47.861941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:46.827 [2024-11-25 12:17:47.861984] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.827 [2024-11-25 12:17:47.862065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.827 [2024-11-25 12:17:47.862126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.827 [2024-11-25 12:17:47.862158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.827 [2024-11-25 12:17:47.862189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.827 [2024-11-25 12:17:47.862279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.827 [2024-11-25 12:17:47.862313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.827 [2024-11-25 12:17:47.862335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.827 [2024-11-25 12:17:47.862384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:19:46.827 [2024-11-25 12:17:47.862407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.827 [2024-11-25 12:17:47.862472] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
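The trace records above come from the 'FTL startup' management sequence in mngt/ftl_mngt.c, which runs when an FTL bdev instance is created over RPC. A minimal sketch of that creation call, assuming the usual bdev_ftl_create flags; the cache bdev name nvc0n1p0 and the get-stats call are taken from this log, while the base bdev name is a placeholder:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Create an FTL bdev: -b instance name, -d base (data) bdev, -c NV cache bdev.
# "nvme0n1" is an assumed placeholder; this part of the log does not name the base bdev.
$rpc_py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0
# Inspect the instance once 'FTL startup' reports result 0.
$rpc_py bdev_ftl_get_stats -b ftl0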
00:19:46.827 [2024-11-25 12:17:47.862543] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:48.727 [2024-11-25 12:17:49.717388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.727 [2024-11-25 12:17:49.717577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:48.727 [2024-11-25 12:17:49.717605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1854.906 ms 00:19:48.727 [2024-11-25 12:17:49.717614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.727 [2024-11-25 12:17:49.742898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.727 [2024-11-25 12:17:49.742944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.727 [2024-11-25 12:17:49.742970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.082 ms 00:19:48.727 [2024-11-25 12:17:49.742978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.727 [2024-11-25 12:17:49.743128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.727 [2024-11-25 12:17:49.743139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.727 [2024-11-25 12:17:49.743152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:48.727 [2024-11-25 12:17:49.743159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.727 [2024-11-25 12:17:49.783474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.727 [2024-11-25 12:17:49.783676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.727 [2024-11-25 12:17:49.783702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.277 ms 00:19:48.727 [2024-11-25 12:17:49.783711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.727 [2024-11-25 12:17:49.783758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.727 [2024-11-25 12:17:49.783770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.727 [2024-11-25 12:17:49.783780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.727 [2024-11-25 12:17:49.783787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.727 [2024-11-25 12:17:49.784173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.727 [2024-11-25 12:17:49.784191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.727 [2024-11-25 12:17:49.784202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:48.727 [2024-11-25 12:17:49.784209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.727 [2024-11-25 12:17:49.784333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.727 [2024-11-25 12:17:49.784342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.727 [2024-11-25 12:17:49.784354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:48.727 [2024-11-25 12:17:49.784361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.727 [2024-11-25 12:17:49.797228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.727 [2024-11-25 12:17:49.797262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.727 [2024-11-25 
12:17:49.797274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.849 ms 00:19:48.727 [2024-11-25 12:17:49.797282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.985 [2024-11-25 12:17:49.808609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:48.985 [2024-11-25 12:17:49.813731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.985 [2024-11-25 12:17:49.813766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:48.985 [2024-11-25 12:17:49.813778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.359 ms 00:19:48.985 [2024-11-25 12:17:49.813789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.985 [2024-11-25 12:17:49.872955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.985 [2024-11-25 12:17:49.873150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:48.985 [2024-11-25 12:17:49.873169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.137 ms 00:19:48.985 [2024-11-25 12:17:49.873179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.985 [2024-11-25 12:17:49.873369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.985 [2024-11-25 12:17:49.873385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:48.985 [2024-11-25 12:17:49.873393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:19:48.985 [2024-11-25 12:17:49.873402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.985 [2024-11-25 12:17:49.896313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:49.896467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:48.986 [2024-11-25 12:17:49.896485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.877 ms 00:19:48.986 [2024-11-25 12:17:49.896495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 12:17:49.918777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:49.918818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:48.986 [2024-11-25 12:17:49.918830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.251 ms 00:19:48.986 [2024-11-25 12:17:49.918840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 12:17:49.919407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:49.919428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:48.986 [2024-11-25 12:17:49.919438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:19:48.986 [2024-11-25 12:17:49.919447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 12:17:49.982589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:49.982767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:48.986 [2024-11-25 12:17:49.982785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.096 ms 00:19:48.986 [2024-11-25 12:17:49.982794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 
12:17:50.007415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:50.007472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:48.986 [2024-11-25 12:17:50.007485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.550 ms 00:19:48.986 [2024-11-25 12:17:50.007497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 12:17:50.030981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:50.031029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:48.986 [2024-11-25 12:17:50.031041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.442 ms 00:19:48.986 [2024-11-25 12:17:50.031050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 12:17:50.054250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:50.054307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:48.986 [2024-11-25 12:17:50.054319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:19:48.986 [2024-11-25 12:17:50.054327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 12:17:50.054366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:50.054379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:48.986 [2024-11-25 12:17:50.054387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:48.986 [2024-11-25 12:17:50.054396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 12:17:50.054474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.986 [2024-11-25 12:17:50.054485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:48.986 [2024-11-25 12:17:50.054493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:48.986 [2024-11-25 12:17:50.054502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.986 [2024-11-25 12:17:50.055342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2208.903 ms, result 0 00:19:48.986 { 00:19:48.986 "name": "ftl0", 00:19:48.986 "uuid": "ca59695f-3901-438d-92d8-9f2a604383ee" 00:19:48.986 } 00:19:49.243 12:17:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:49.243 12:17:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:49.243 12:17:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:49.243 12:17:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:49.500 [2024-11-25 12:17:50.379675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:49.500 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:49.500 Zero copy mechanism will not be used. 00:19:49.500 Running I/O for 4 seconds... 
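This run, like the two after it, is driven through bdevperf's RPC harness: a bdevperf process sits idle and bdevperf.py perform_tests submits each workload. A rough sketch of that pattern, assuming bdevperf was launched with -z (wait for an RPC-triggered test) and a JSON config; the binary path, launch flags, and config path are assumptions, and only the perform_tests arguments are verbatim from this log:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf   # assumed binary location
perf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# Start bdevperf idle and let it wait for perform_tests (launch flags assumed).
$bdevperf -z --json /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json &

# 69632 B = 68 KiB per I/O, just above the 65536 B threshold, which is why
# the notice above reports that the zero copy mechanism will not be used.
$perf_py perform_tests -q 1 -w randwrite -t 4 -o 69632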
00:19:51.367 3302.00 IOPS, 219.27 MiB/s [2024-11-25T12:17:53.818Z] 3256.50 IOPS, 216.25 MiB/s [2024-11-25T12:17:54.511Z] 3265.33 IOPS, 216.84 MiB/s [2024-11-25T12:17:54.511Z] 3259.25 IOPS, 216.43 MiB/s 00:19:53.431 Latency(us) 00:19:53.431 [2024-11-25T12:17:54.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.431 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:53.431 ftl0 : 4.00 3258.00 216.35 0.00 0.00 322.53 163.84 2155.13 00:19:53.431 [2024-11-25T12:17:54.511Z] =================================================================================================================== 00:19:53.431 [2024-11-25T12:17:54.511Z] Total : 3258.00 216.35 0.00 0.00 322.53 163.84 2155.13 00:19:53.431 [2024-11-25 12:17:54.389619] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:53.431 { 00:19:53.431 "results": [ 00:19:53.431 { 00:19:53.431 "job": "ftl0", 00:19:53.431 "core_mask": "0x1", 00:19:53.431 "workload": "randwrite", 00:19:53.431 "status": "finished", 00:19:53.431 "queue_depth": 1, 00:19:53.431 "io_size": 69632, 00:19:53.431 "runtime": 4.001836, 00:19:53.431 "iops": 3258.004575899662, 00:19:53.431 "mibps": 216.35186636833694, 00:19:53.431 "io_failed": 0, 00:19:53.431 "io_timeout": 0, 00:19:53.431 "avg_latency_us": 322.5283122706408, 00:19:53.431 "min_latency_us": 163.84, 00:19:53.431 "max_latency_us": 2155.126153846154 00:19:53.431 } 00:19:53.431 ], 00:19:53.431 "core_count": 1 00:19:53.431 } 00:19:53.431 12:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:53.431 [2024-11-25 12:17:54.496999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:53.431 Running I/O for 4 seconds... 
00:19:55.734 11381.00 IOPS, 44.46 MiB/s [2024-11-25T12:17:57.747Z] 11288.00 IOPS, 44.09 MiB/s [2024-11-25T12:17:58.677Z] 10992.00 IOPS, 42.94 MiB/s [2024-11-25T12:17:58.677Z] 10887.25 IOPS, 42.53 MiB/s 00:19:57.597 Latency(us) 00:19:57.597 [2024-11-25T12:17:58.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.598 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:57.598 ftl0 : 4.02 10878.12 42.49 0.00 0.00 11743.96 234.73 29642.44 00:19:57.598 [2024-11-25T12:17:58.678Z] =================================================================================================================== 00:19:57.598 [2024-11-25T12:17:58.678Z] Total : 10878.12 42.49 0.00 0.00 11743.96 0.00 29642.44 00:19:57.598 [2024-11-25 12:17:58.520667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:57.598 { 00:19:57.598 "results": [ 00:19:57.598 { 00:19:57.598 "job": "ftl0", 00:19:57.598 "core_mask": "0x1", 00:19:57.598 "workload": "randwrite", 00:19:57.598 "status": "finished", 00:19:57.598 "queue_depth": 128, 00:19:57.598 "io_size": 4096, 00:19:57.598 "runtime": 4.015124, 00:19:57.598 "iops": 10878.119828926827, 00:19:57.598 "mibps": 42.49265558174542, 00:19:57.598 "io_failed": 0, 00:19:57.598 "io_timeout": 0, 00:19:57.598 "avg_latency_us": 11743.9648530383, 00:19:57.598 "min_latency_us": 234.7323076923077, 00:19:57.598 "max_latency_us": 29642.436923076923 00:19:57.598 } 00:19:57.598 ], 00:19:57.598 "core_count": 1 00:19:57.598 } 00:19:57.598 12:17:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:57.598 [2024-11-25 12:17:58.634858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:57.598 Running I/O for 4 seconds... 
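The last workload is a verify pass: bdevperf writes the LBA range, reads it back, and checks data integrity; start 0x0 and length 0x1400000 (20971520) are echoed both in the latency table and in the verify_range of the JSON results below. Once it completes, the bdev is deleted over RPC, which starts the 'FTL shutdown' management sequence that follows. The exact teardown call, as invoked by bdevperf.sh@34 below:

# Tear down the FTL instance after the performance runs complete.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0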
00:19:59.903 8833.00 IOPS, 34.50 MiB/s [2024-11-25T12:18:01.915Z] 8914.50 IOPS, 34.82 MiB/s [2024-11-25T12:18:02.848Z] 8852.00 IOPS, 34.58 MiB/s [2024-11-25T12:18:02.848Z] 9066.00 IOPS, 35.41 MiB/s 00:20:01.768 Latency(us) 00:20:01.768 [2024-11-25T12:18:02.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.768 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:01.768 Verification LBA range: start 0x0 length 0x1400000 00:20:01.768 ftl0 : 4.01 9070.36 35.43 0.00 0.00 14061.10 225.28 24197.91 00:20:01.768 [2024-11-25T12:18:02.848Z] =================================================================================================================== 00:20:01.768 [2024-11-25T12:18:02.848Z] Total : 9070.36 35.43 0.00 0.00 14061.10 0.00 24197.91 00:20:01.768 [2024-11-25 12:18:02.661827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:01.768 { 00:20:01.768 "results": [ 00:20:01.768 { 00:20:01.768 "job": "ftl0", 00:20:01.768 "core_mask": "0x1", 00:20:01.768 "workload": "verify", 00:20:01.768 "status": "finished", 00:20:01.768 "verify_range": { 00:20:01.768 "start": 0, 00:20:01.768 "length": 20971520 00:20:01.768 }, 00:20:01.768 "queue_depth": 128, 00:20:01.768 "io_size": 4096, 00:20:01.768 "runtime": 4.012081, 00:20:01.768 "iops": 9070.355259527412, 00:20:01.768 "mibps": 35.431075232528954, 00:20:01.768 "io_failed": 0, 00:20:01.768 "io_timeout": 0, 00:20:01.768 "avg_latency_us": 14061.097375978423, 00:20:01.768 "min_latency_us": 225.28, 00:20:01.768 "max_latency_us": 24197.907692307694 00:20:01.768 } 00:20:01.768 ], 00:20:01.768 "core_count": 1 00:20:01.768 } 00:20:02.026 12:18:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:02.026 [2024-11-25 12:18:02.863796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.026 [2024-11-25 12:18:02.863854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.026 [2024-11-25 12:18:02.863869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:02.026 [2024-11-25 12:18:02.863878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.026 [2024-11-25 12:18:02.863899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.026 [2024-11-25 12:18:02.866518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.026 [2024-11-25 12:18:02.866553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.026 [2024-11-25 12:18:02.866567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:20:02.026 [2024-11-25 12:18:02.866574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.026 [2024-11-25 12:18:02.868267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.026 [2024-11-25 12:18:02.868301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:02.026 [2024-11-25 12:18:02.868313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:20:02.026 [2024-11-25 12:18:02.868320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.026 [2024-11-25 12:18:03.002967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.026 [2024-11-25 12:18:03.003026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:20:02.026 [2024-11-25 12:18:03.003044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 134.608 ms 00:20:02.026 [2024-11-25 12:18:03.003052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.026 [2024-11-25 12:18:03.009216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.026 [2024-11-25 12:18:03.009478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:02.027 [2024-11-25 12:18:03.009499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:20:02.027 [2024-11-25 12:18:03.009507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.027 [2024-11-25 12:18:03.033754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.027 [2024-11-25 12:18:03.033805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:02.027 [2024-11-25 12:18:03.033818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.183 ms 00:20:02.027 [2024-11-25 12:18:03.033826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.027 [2024-11-25 12:18:03.048886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.027 [2024-11-25 12:18:03.049105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:02.027 [2024-11-25 12:18:03.049130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.008 ms 00:20:02.027 [2024-11-25 12:18:03.049138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.027 [2024-11-25 12:18:03.049292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.027 [2024-11-25 12:18:03.049303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:02.027 [2024-11-25 12:18:03.049316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:20:02.027 [2024-11-25 12:18:03.049323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.027 [2024-11-25 12:18:03.073031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.027 [2024-11-25 12:18:03.073082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:02.027 [2024-11-25 12:18:03.073096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.678 ms 00:20:02.027 [2024-11-25 12:18:03.073103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.027 [2024-11-25 12:18:03.096031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.027 [2024-11-25 12:18:03.096236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:02.027 [2024-11-25 12:18:03.096258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:20:02.027 [2024-11-25 12:18:03.096266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.286 [2024-11-25 12:18:03.118667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.286 [2024-11-25 12:18:03.118714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:02.286 [2024-11-25 12:18:03.118728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.354 ms 00:20:02.286 [2024-11-25 12:18:03.118735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.286 [2024-11-25 12:18:03.140654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.286 [2024-11-25 12:18:03.140694] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:02.286 [2024-11-25 12:18:03.140710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.828 ms 00:20:02.286 [2024-11-25 12:18:03.140717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.286 [2024-11-25 12:18:03.140752] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:02.286 [2024-11-25 12:18:03.140766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:02.286 [2024-11-25 12:18:03.140902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.140911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.140918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.140929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.140936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.140960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:02.287 [2024-11-25 12:18:03.140969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.140978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.140985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.140994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141628] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:02.287 [2024-11-25 12:18:03.141670] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:02.287 [2024-11-25 12:18:03.141679] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ca59695f-3901-438d-92d8-9f2a604383ee 00:20:02.287 [2024-11-25 12:18:03.141687] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:02.287 [2024-11-25 12:18:03.141696] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:02.287 [2024-11-25 12:18:03.141705] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:02.287 [2024-11-25 12:18:03.141714] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:02.287 [2024-11-25 12:18:03.141721] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:02.288 [2024-11-25 12:18:03.141730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:02.288 [2024-11-25 12:18:03.141737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:02.288 [2024-11-25 12:18:03.141746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:02.288 [2024-11-25 12:18:03.141752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:02.288 [2024-11-25 12:18:03.141760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.288 [2024-11-25 12:18:03.141768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:02.288 [2024-11-25 12:18:03.141777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:20:02.288 [2024-11-25 12:18:03.141784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.154383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.288 [2024-11-25 12:18:03.154419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:02.288 [2024-11-25 12:18:03.154431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.549 ms 00:20:02.288 [2024-11-25 12:18:03.154439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.154771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.288 [2024-11-25 12:18:03.154788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:02.288 [2024-11-25 12:18:03.154798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:20:02.288 [2024-11-25 12:18:03.154806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.189316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.189371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.288 [2024-11-25 12:18:03.189387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.189395] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.189460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.189468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.288 [2024-11-25 12:18:03.189477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.189485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.189557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.189569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.288 [2024-11-25 12:18:03.189578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.189585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.189601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.189609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.288 [2024-11-25 12:18:03.189617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.189625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.266915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.266989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.288 [2024-11-25 12:18:03.267004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.267013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.330179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.330230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.288 [2024-11-25 12:18:03.330243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.330251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.330334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.330344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.288 [2024-11-25 12:18:03.330357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.330364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.330407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.330416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.288 [2024-11-25 12:18:03.330425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.330432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.330517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.330526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.288 [2024-11-25 12:18:03.330540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:02.288 [2024-11-25 12:18:03.330548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.330576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.330584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:02.288 [2024-11-25 12:18:03.330594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.330601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.330634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.330642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.288 [2024-11-25 12:18:03.330651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.330660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.330699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.288 [2024-11-25 12:18:03.330715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.288 [2024-11-25 12:18:03.330724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.288 [2024-11-25 12:18:03.330732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.288 [2024-11-25 12:18:03.330844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 467.013 ms, result 0 00:20:02.288 true 00:20:02.288 12:18:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76154 00:20:02.288 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76154 ']' 00:20:02.288 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76154 00:20:02.288 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:02.288 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.288 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76154 00:20:02.593 killing process with pid 76154 00:20:02.593 Received shutdown signal, test time was about 4.000000 seconds 00:20:02.593 00:20:02.593 Latency(us) 00:20:02.593 [2024-11-25T12:18:03.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.593 [2024-11-25T12:18:03.673Z] =================================================================================================================== 00:20:02.593 [2024-11-25T12:18:03.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.593 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.593 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.593 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76154' 00:20:02.593 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76154 00:20:02.593 12:18:03 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76154 00:20:07.940 Remove shared memory files 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:07.940 12:18:08 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:07.940 ************************************ 00:20:07.940 END TEST ftl_bdevperf 00:20:07.940 ************************************ 00:20:07.940 00:20:07.940 real 0m24.051s 00:20:07.940 user 0m26.551s 00:20:07.940 sys 0m0.883s 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.940 12:18:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 12:18:08 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:07.940 12:18:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:07.940 12:18:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.940 12:18:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 ************************************ 00:20:07.940 START TEST ftl_trim 00:20:07.940 ************************************ 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:07.940 * Looking for test storage... 00:20:07.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.940 12:18:08 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:07.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.940 --rc genhtml_branch_coverage=1 00:20:07.940 --rc genhtml_function_coverage=1 00:20:07.940 --rc genhtml_legend=1 00:20:07.940 --rc geninfo_all_blocks=1 00:20:07.940 --rc geninfo_unexecuted_blocks=1 00:20:07.940 00:20:07.940 ' 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:07.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.940 --rc genhtml_branch_coverage=1 00:20:07.940 --rc genhtml_function_coverage=1 00:20:07.940 --rc genhtml_legend=1 00:20:07.940 --rc geninfo_all_blocks=1 00:20:07.940 --rc geninfo_unexecuted_blocks=1 00:20:07.940 00:20:07.940 ' 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:07.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.940 --rc genhtml_branch_coverage=1 00:20:07.940 --rc genhtml_function_coverage=1 00:20:07.940 --rc genhtml_legend=1 00:20:07.940 --rc geninfo_all_blocks=1 00:20:07.940 --rc geninfo_unexecuted_blocks=1 00:20:07.940 00:20:07.940 ' 00:20:07.940 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:07.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.940 --rc genhtml_branch_coverage=1 00:20:07.940 --rc genhtml_function_coverage=1 00:20:07.940 --rc genhtml_legend=1 00:20:07.940 --rc geninfo_all_blocks=1 00:20:07.940 --rc geninfo_unexecuted_blocks=1 00:20:07.940 00:20:07.940 ' 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
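The cmp_versions run traced above is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x: both version strings are split into numeric fields (IFS=.-:), and the fields are compared left to right, with the first differing field settling the comparison, so 1.15 sorts before 2 even though a plain string compare would order them the other way. A minimal standalone sketch of that field-wise check, assuming a simplified helper that splits on dots only and treats missing fields as zero (the name version_lt is illustrative, not the harness's actual function):

version_lt() {
    # split both version strings on '.' into arrays
    # (the harness additionally splits on '-' and ':')
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # absent fields compare as 0
        (( x < y )) && return 0           # first differing field decides
        (( x > y )) && return 1
    done
    return 1                              # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

Because the check succeeds here, the pre-2.0 option spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is the one exported into LCOV_OPTS in the trace above.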
00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:07.940 12:18:08 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:07.941 12:18:08 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76481 00:20:07.941 12:18:08 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76481 00:20:07.941 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76481 ']' 00:20:07.941 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.941 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.941 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.941 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.941 12:18:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:07.941 [2024-11-25 12:18:08.463910] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:20:07.941 [2024-11-25 12:18:08.464647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76481 ] 00:20:07.941 [2024-11-25 12:18:08.641042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.941 [2024-11-25 12:18:08.743353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.941 [2024-11-25 12:18:08.743756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.941 [2024-11-25 12:18:08.743776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.506 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.506 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:08.506 12:18:09 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:08.506 12:18:09 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:08.506 12:18:09 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:08.506 12:18:09 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:08.506 12:18:09 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:08.506 12:18:09 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:08.765 12:18:09 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:08.765 12:18:09 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:08.765 12:18:09 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:08.765 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:08.765 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:08.765 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:08.765 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:08.765 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:08.765 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:08.765 { 00:20:08.765 "name": "nvme0n1", 00:20:08.765 "aliases": [ 
00:20:08.765 "26ab7f21-78dd-4bf7-9d4a-e00552e40c37" 00:20:08.765 ], 00:20:08.765 "product_name": "NVMe disk", 00:20:08.765 "block_size": 4096, 00:20:08.765 "num_blocks": 1310720, 00:20:08.765 "uuid": "26ab7f21-78dd-4bf7-9d4a-e00552e40c37", 00:20:08.765 "numa_id": -1, 00:20:08.765 "assigned_rate_limits": { 00:20:08.765 "rw_ios_per_sec": 0, 00:20:08.765 "rw_mbytes_per_sec": 0, 00:20:08.765 "r_mbytes_per_sec": 0, 00:20:08.765 "w_mbytes_per_sec": 0 00:20:08.765 }, 00:20:08.765 "claimed": true, 00:20:08.765 "claim_type": "read_many_write_one", 00:20:08.765 "zoned": false, 00:20:08.765 "supported_io_types": { 00:20:08.765 "read": true, 00:20:08.765 "write": true, 00:20:08.765 "unmap": true, 00:20:08.765 "flush": true, 00:20:08.765 "reset": true, 00:20:08.766 "nvme_admin": true, 00:20:08.766 "nvme_io": true, 00:20:08.766 "nvme_io_md": false, 00:20:08.766 "write_zeroes": true, 00:20:08.766 "zcopy": false, 00:20:08.766 "get_zone_info": false, 00:20:08.766 "zone_management": false, 00:20:08.766 "zone_append": false, 00:20:08.766 "compare": true, 00:20:08.766 "compare_and_write": false, 00:20:08.766 "abort": true, 00:20:08.766 "seek_hole": false, 00:20:08.766 "seek_data": false, 00:20:08.766 "copy": true, 00:20:08.766 "nvme_iov_md": false 00:20:08.766 }, 00:20:08.766 "driver_specific": { 00:20:08.766 "nvme": [ 00:20:08.766 { 00:20:08.766 "pci_address": "0000:00:11.0", 00:20:08.766 "trid": { 00:20:08.766 "trtype": "PCIe", 00:20:08.766 "traddr": "0000:00:11.0" 00:20:08.766 }, 00:20:08.766 "ctrlr_data": { 00:20:08.766 "cntlid": 0, 00:20:08.766 "vendor_id": "0x1b36", 00:20:08.766 "model_number": "QEMU NVMe Ctrl", 00:20:08.766 "serial_number": "12341", 00:20:08.766 "firmware_revision": "8.0.0", 00:20:08.766 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:08.766 "oacs": { 00:20:08.766 "security": 0, 00:20:08.766 "format": 1, 00:20:08.766 "firmware": 0, 00:20:08.766 "ns_manage": 1 00:20:08.766 }, 00:20:08.766 "multi_ctrlr": false, 00:20:08.766 "ana_reporting": false 00:20:08.766 }, 00:20:08.766 "vs": { 00:20:08.766 "nvme_version": "1.4" 00:20:08.766 }, 00:20:08.766 "ns_data": { 00:20:08.766 "id": 1, 00:20:08.766 "can_share": false 00:20:08.766 } 00:20:08.766 } 00:20:08.766 ], 00:20:08.766 "mp_policy": "active_passive" 00:20:08.766 } 00:20:08.766 } 00:20:08.766 ]' 00:20:08.766 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:08.766 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:08.766 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:09.024 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:09.024 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:09.024 12:18:09 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:09.024 12:18:09 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:09.024 12:18:09 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:09.024 12:18:09 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:09.024 12:18:09 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:09.024 12:18:09 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:09.024 12:18:10 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f88924a3-f39e-4b0f-888c-5ef9d28838a6 00:20:09.024 12:18:10 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:09.024 12:18:10 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f88924a3-f39e-4b0f-888c-5ef9d28838a6 00:20:09.282 12:18:10 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:09.541 12:18:10 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=dfdd407b-cc17-4a91-9380-be1d66e1a5bc 00:20:09.541 12:18:10 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dfdd407b-cc17-4a91-9380-be1d66e1a5bc 00:20:09.799 12:18:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:09.799 12:18:10 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:09.799 12:18:10 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:09.799 12:18:10 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:09.799 12:18:10 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:09.799 12:18:10 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:09.799 12:18:10 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:09.799 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:09.799 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:09.799 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:09.799 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:09.799 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:10.058 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:10.058 { 00:20:10.058 "name": "2d53d5d8-45b5-4169-876d-6d70785b4f8d", 00:20:10.058 "aliases": [ 00:20:10.058 "lvs/nvme0n1p0" 00:20:10.058 ], 00:20:10.058 "product_name": "Logical Volume", 00:20:10.058 "block_size": 4096, 00:20:10.058 "num_blocks": 26476544, 00:20:10.058 "uuid": "2d53d5d8-45b5-4169-876d-6d70785b4f8d", 00:20:10.058 "assigned_rate_limits": { 00:20:10.058 "rw_ios_per_sec": 0, 00:20:10.058 "rw_mbytes_per_sec": 0, 00:20:10.058 "r_mbytes_per_sec": 0, 00:20:10.058 "w_mbytes_per_sec": 0 00:20:10.058 }, 00:20:10.058 "claimed": false, 00:20:10.058 "zoned": false, 00:20:10.058 "supported_io_types": { 00:20:10.058 "read": true, 00:20:10.058 "write": true, 00:20:10.058 "unmap": true, 00:20:10.058 "flush": false, 00:20:10.058 "reset": true, 00:20:10.058 "nvme_admin": false, 00:20:10.058 "nvme_io": false, 00:20:10.058 "nvme_io_md": false, 00:20:10.058 "write_zeroes": true, 00:20:10.058 "zcopy": false, 00:20:10.058 "get_zone_info": false, 00:20:10.058 "zone_management": false, 00:20:10.058 "zone_append": false, 00:20:10.058 "compare": false, 00:20:10.058 "compare_and_write": false, 00:20:10.058 "abort": false, 00:20:10.058 "seek_hole": true, 00:20:10.058 "seek_data": true, 00:20:10.058 "copy": false, 00:20:10.058 "nvme_iov_md": false 00:20:10.058 }, 00:20:10.058 "driver_specific": { 00:20:10.058 "lvol": { 00:20:10.058 "lvol_store_uuid": "dfdd407b-cc17-4a91-9380-be1d66e1a5bc", 00:20:10.058 "base_bdev": "nvme0n1", 00:20:10.058 "thin_provision": true, 00:20:10.058 "num_allocated_clusters": 0, 00:20:10.058 "snapshot": false, 00:20:10.058 "clone": false, 00:20:10.058 "esnap_clone": false 00:20:10.058 } 00:20:10.058 } 00:20:10.058 } 00:20:10.058 ]' 00:20:10.058 12:18:10 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:10.058 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:10.058 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:10.058 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:10.058 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:10.058 12:18:10 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:10.058 12:18:10 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:10.058 12:18:10 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:10.058 12:18:10 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:10.317 12:18:11 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:10.317 12:18:11 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:10.317 12:18:11 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:10.317 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:10.317 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:10.317 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:10.317 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:10.317 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:10.575 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:10.575 { 00:20:10.575 "name": "2d53d5d8-45b5-4169-876d-6d70785b4f8d", 00:20:10.575 "aliases": [ 00:20:10.575 "lvs/nvme0n1p0" 00:20:10.575 ], 00:20:10.575 "product_name": "Logical Volume", 00:20:10.575 "block_size": 4096, 00:20:10.575 "num_blocks": 26476544, 00:20:10.575 "uuid": "2d53d5d8-45b5-4169-876d-6d70785b4f8d", 00:20:10.575 "assigned_rate_limits": { 00:20:10.575 "rw_ios_per_sec": 0, 00:20:10.575 "rw_mbytes_per_sec": 0, 00:20:10.575 "r_mbytes_per_sec": 0, 00:20:10.575 "w_mbytes_per_sec": 0 00:20:10.575 }, 00:20:10.575 "claimed": false, 00:20:10.575 "zoned": false, 00:20:10.575 "supported_io_types": { 00:20:10.575 "read": true, 00:20:10.575 "write": true, 00:20:10.575 "unmap": true, 00:20:10.575 "flush": false, 00:20:10.575 "reset": true, 00:20:10.575 "nvme_admin": false, 00:20:10.575 "nvme_io": false, 00:20:10.575 "nvme_io_md": false, 00:20:10.575 "write_zeroes": true, 00:20:10.575 "zcopy": false, 00:20:10.575 "get_zone_info": false, 00:20:10.575 "zone_management": false, 00:20:10.575 "zone_append": false, 00:20:10.575 "compare": false, 00:20:10.575 "compare_and_write": false, 00:20:10.575 "abort": false, 00:20:10.575 "seek_hole": true, 00:20:10.575 "seek_data": true, 00:20:10.575 "copy": false, 00:20:10.575 "nvme_iov_md": false 00:20:10.575 }, 00:20:10.575 "driver_specific": { 00:20:10.575 "lvol": { 00:20:10.575 "lvol_store_uuid": "dfdd407b-cc17-4a91-9380-be1d66e1a5bc", 00:20:10.575 "base_bdev": "nvme0n1", 00:20:10.575 "thin_provision": true, 00:20:10.575 "num_allocated_clusters": 0, 00:20:10.575 "snapshot": false, 00:20:10.575 "clone": false, 00:20:10.575 "esnap_clone": false 00:20:10.575 } 00:20:10.575 } 00:20:10.575 } 00:20:10.575 ]' 00:20:10.575 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:10.575 12:18:11 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:10.575 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:10.575 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:10.575 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:10.575 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:10.575 12:18:11 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:10.575 12:18:11 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:10.834 12:18:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:10.834 12:18:11 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:10.834 12:18:11 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d53d5d8-45b5-4169-876d-6d70785b4f8d 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:10.834 { 00:20:10.834 "name": "2d53d5d8-45b5-4169-876d-6d70785b4f8d", 00:20:10.834 "aliases": [ 00:20:10.834 "lvs/nvme0n1p0" 00:20:10.834 ], 00:20:10.834 "product_name": "Logical Volume", 00:20:10.834 "block_size": 4096, 00:20:10.834 "num_blocks": 26476544, 00:20:10.834 "uuid": "2d53d5d8-45b5-4169-876d-6d70785b4f8d", 00:20:10.834 "assigned_rate_limits": { 00:20:10.834 "rw_ios_per_sec": 0, 00:20:10.834 "rw_mbytes_per_sec": 0, 00:20:10.834 "r_mbytes_per_sec": 0, 00:20:10.834 "w_mbytes_per_sec": 0 00:20:10.834 }, 00:20:10.834 "claimed": false, 00:20:10.834 "zoned": false, 00:20:10.834 "supported_io_types": { 00:20:10.834 "read": true, 00:20:10.834 "write": true, 00:20:10.834 "unmap": true, 00:20:10.834 "flush": false, 00:20:10.834 "reset": true, 00:20:10.834 "nvme_admin": false, 00:20:10.834 "nvme_io": false, 00:20:10.834 "nvme_io_md": false, 00:20:10.834 "write_zeroes": true, 00:20:10.834 "zcopy": false, 00:20:10.834 "get_zone_info": false, 00:20:10.834 "zone_management": false, 00:20:10.834 "zone_append": false, 00:20:10.834 "compare": false, 00:20:10.834 "compare_and_write": false, 00:20:10.834 "abort": false, 00:20:10.834 "seek_hole": true, 00:20:10.834 "seek_data": true, 00:20:10.834 "copy": false, 00:20:10.834 "nvme_iov_md": false 00:20:10.834 }, 00:20:10.834 "driver_specific": { 00:20:10.834 "lvol": { 00:20:10.834 "lvol_store_uuid": "dfdd407b-cc17-4a91-9380-be1d66e1a5bc", 00:20:10.834 "base_bdev": "nvme0n1", 00:20:10.834 "thin_provision": true, 00:20:10.834 "num_allocated_clusters": 0, 00:20:10.834 "snapshot": false, 00:20:10.834 "clone": false, 00:20:10.834 "esnap_clone": false 00:20:10.834 } 00:20:10.834 } 00:20:10.834 } 00:20:10.834 ]' 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:10.834 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:11.093 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:20:11.093 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:11.093 12:18:11 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:11.093 12:18:11 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:11.093 12:18:11 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2d53d5d8-45b5-4169-876d-6d70785b4f8d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:11.093 [2024-11-25 12:18:12.100693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.100742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:11.093 [2024-11-25 12:18:12.100766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.093 [2024-11-25 12:18:12.100778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.093 [2024-11-25 12:18:12.103799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.103974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.093 [2024-11-25 12:18:12.103997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:20:11.093 [2024-11-25 12:18:12.104006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.093 [2024-11-25 12:18:12.104161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:11.093 [2024-11-25 12:18:12.104915] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:11.093 [2024-11-25 12:18:12.104942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.104962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.093 [2024-11-25 12:18:12.104974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:20:11.093 [2024-11-25 12:18:12.104981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.093 [2024-11-25 12:18:12.105388] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c35f6ec4-de26-452c-bcbe-87dd6023e02d 00:20:11.093 [2024-11-25 12:18:12.106442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.106477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:11.093 [2024-11-25 12:18:12.106488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:11.093 [2024-11-25 12:18:12.106498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.093 [2024-11-25 12:18:12.111755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.111882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.093 [2024-11-25 12:18:12.111900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:20:11.093 [2024-11-25 12:18:12.111912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.093 [2024-11-25 12:18:12.112064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.112084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.093 [2024-11-25 12:18:12.112093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.069 ms 00:20:11.093 [2024-11-25 12:18:12.112105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.093 [2024-11-25 12:18:12.112142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.112152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.093 [2024-11-25 12:18:12.112160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:11.093 [2024-11-25 12:18:12.112169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.093 [2024-11-25 12:18:12.112203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:11.093 [2024-11-25 12:18:12.116025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.116059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.093 [2024-11-25 12:18:12.116075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:20:11.093 [2024-11-25 12:18:12.116083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.093 [2024-11-25 12:18:12.116148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.093 [2024-11-25 12:18:12.116159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.093 [2024-11-25 12:18:12.116168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:11.093 [2024-11-25 12:18:12.116188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.094 [2024-11-25 12:18:12.116215] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:11.094 [2024-11-25 12:18:12.116348] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:11.094 [2024-11-25 12:18:12.116362] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.094 [2024-11-25 12:18:12.116374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:11.094 [2024-11-25 12:18:12.116385] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116393] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116403] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:11.094 [2024-11-25 12:18:12.116411] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.094 [2024-11-25 12:18:12.116419] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:11.094 [2024-11-25 12:18:12.116428] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:11.094 [2024-11-25 12:18:12.116438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.094 [2024-11-25 12:18:12.116446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.094 [2024-11-25 12:18:12.116455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:20:11.094 [2024-11-25 12:18:12.116462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.094 [2024-11-25 12:18:12.116555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.094 
[2024-11-25 12:18:12.116563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.094 [2024-11-25 12:18:12.116572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:11.094 [2024-11-25 12:18:12.116579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.094 [2024-11-25 12:18:12.116720] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.094 [2024-11-25 12:18:12.116729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.094 [2024-11-25 12:18:12.116743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.094 [2024-11-25 12:18:12.116777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.094 [2024-11-25 12:18:12.116800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.094 [2024-11-25 12:18:12.116815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.094 [2024-11-25 12:18:12.116822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:11.094 [2024-11-25 12:18:12.116830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.094 [2024-11-25 12:18:12.116836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.094 [2024-11-25 12:18:12.116845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:11.094 [2024-11-25 12:18:12.116852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.094 [2024-11-25 12:18:12.116867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.094 [2024-11-25 12:18:12.116892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.094 [2024-11-25 12:18:12.116913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.094 [2024-11-25 12:18:12.116937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:11.094 [2024-11-25 12:18:12.116977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:11.094 [2024-11-25 12:18:12.116985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.094 [2024-11-25 12:18:12.116992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.094 [2024-11-25 12:18:12.117001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:11.094 [2024-11-25 12:18:12.117008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.094 [2024-11-25 12:18:12.117016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:11.094 [2024-11-25 12:18:12.117023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:11.094 [2024-11-25 12:18:12.117031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.094 [2024-11-25 12:18:12.117038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:11.094 [2024-11-25 12:18:12.117046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:11.094 [2024-11-25 12:18:12.117052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.094 [2024-11-25 12:18:12.117060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:11.094 [2024-11-25 12:18:12.117067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:11.094 [2024-11-25 12:18:12.117074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.094 [2024-11-25 12:18:12.117080] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.094 [2024-11-25 12:18:12.117089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.094 [2024-11-25 12:18:12.117096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.094 [2024-11-25 12:18:12.117106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.094 [2024-11-25 12:18:12.117114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:11.094 [2024-11-25 12:18:12.117124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.094 [2024-11-25 12:18:12.117130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.094 [2024-11-25 12:18:12.117139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.094 [2024-11-25 12:18:12.117145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.094 [2024-11-25 12:18:12.117153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.094 [2024-11-25 12:18:12.117163] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.094 [2024-11-25 12:18:12.117173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.094 [2024-11-25 12:18:12.117181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:11.094 [2024-11-25 12:18:12.117192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:11.094 [2024-11-25 12:18:12.117199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:11.094 [2024-11-25 12:18:12.117207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:11.094 [2024-11-25 12:18:12.117214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:11.094 [2024-11-25 12:18:12.117223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:11.094 [2024-11-25 12:18:12.117230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:11.094 [2024-11-25 12:18:12.117238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:11.094 [2024-11-25 12:18:12.117245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:11.094 [2024-11-25 12:18:12.117255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:11.094 [2024-11-25 12:18:12.117262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:11.094 [2024-11-25 12:18:12.117270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:11.094 [2024-11-25 12:18:12.117277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:11.094 [2024-11-25 12:18:12.117286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:11.094 [2024-11-25 12:18:12.117293] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.094 [2024-11-25 12:18:12.117307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.094 [2024-11-25 12:18:12.117315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.094 [2024-11-25 12:18:12.117324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.094 [2024-11-25 12:18:12.117331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.094 [2024-11-25 12:18:12.117359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:11.094 [2024-11-25 12:18:12.117368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.094 [2024-11-25 12:18:12.117377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.094 [2024-11-25 12:18:12.117384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:20:11.094 [2024-11-25 12:18:12.117393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.094 [2024-11-25 12:18:12.117465] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:11.095 [2024-11-25 12:18:12.117477] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:13.627 [2024-11-25 12:18:14.127182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.127246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:13.627 [2024-11-25 12:18:14.127261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2009.708 ms 00:20:13.627 [2024-11-25 12:18:14.127272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.152676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.152732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.627 [2024-11-25 12:18:14.152745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.161 ms 00:20:13.627 [2024-11-25 12:18:14.152755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.152895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.152907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.627 [2024-11-25 12:18:14.152916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:13.627 [2024-11-25 12:18:14.152927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.196647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.196922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.627 [2024-11-25 12:18:14.196976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.649 ms 00:20:13.627 [2024-11-25 12:18:14.196999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.197165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.197190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.627 [2024-11-25 12:18:14.197205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.627 [2024-11-25 12:18:14.197220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.197650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.197690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.627 [2024-11-25 12:18:14.197706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:20:13.627 [2024-11-25 12:18:14.197723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.197926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.197943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.627 [2024-11-25 12:18:14.197977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:20:13.627 [2024-11-25 12:18:14.197996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.213737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.213771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:13.627 [2024-11-25 12:18:14.213781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.676 ms 00:20:13.627 [2024-11-25 12:18:14.213790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.224996] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:13.627 [2024-11-25 12:18:14.238857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.238893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:13.627 [2024-11-25 12:18:14.238905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.964 ms 00:20:13.627 [2024-11-25 12:18:14.238913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.302308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.302359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:13.627 [2024-11-25 12:18:14.302375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.302 ms 00:20:13.627 [2024-11-25 12:18:14.302383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.302598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.302609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:13.627 [2024-11-25 12:18:14.302622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:20:13.627 [2024-11-25 12:18:14.302629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.325113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.325150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:13.627 [2024-11-25 12:18:14.325163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.454 ms 00:20:13.627 [2024-11-25 12:18:14.325172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.347379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.347528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:13.627 [2024-11-25 12:18:14.347549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.157 ms 00:20:13.627 [2024-11-25 12:18:14.347556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.348153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.348166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:13.627 [2024-11-25 12:18:14.348176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:20:13.627 [2024-11-25 12:18:14.348184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.417458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.417506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:13.627 [2024-11-25 12:18:14.417526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.241 ms 00:20:13.627 [2024-11-25 12:18:14.417534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
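The sizing figures scattered through this startup trace hang together: the layout dump reports "L2P entries: 23592960" (one entry per 4 KiB user block, about 90 GiB of user capacity) and "L2P address size: 4", so a fully resident logical-to-physical table needs 90 MiB, matching the "blocks: 90.00 MiB" shown for the l2p region. That exceeds the --l2p_dram_limit 60 that trim.sh passed to bdev_ftl_create, which is why the ftl_l2p_cache notice above caps the resident set at 59 of 60 MiB. A quick shell-arithmetic check of those numbers (the one MiB the cache holds back is read off the log line, not derived from the FTL sources):

entries=23592960    # L2P entries, from the layout dump
addr_size=4         # bytes per L2P entry, from the layout dump
echo "full L2P table: $(( entries * addr_size / 1024 / 1024 )) MiB"     # -> 90
echo "user capacity:  $(( entries * 4096 / 1024 / 1024 / 1024 )) GiB"   # -> 90
limit=60            # --l2p_dram_limit from trim.sh
echo "resident cache: $(( limit - 1 )) of ${limit} MiB"                 # -> 59 of 60, as logged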
00:20:13.627 [2024-11-25 12:18:14.441697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.441739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:13.627 [2024-11-25 12:18:14.441754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.068 ms 00:20:13.627 [2024-11-25 12:18:14.441762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.464991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.465028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:13.627 [2024-11-25 12:18:14.465041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.170 ms 00:20:13.627 [2024-11-25 12:18:14.465048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.488115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.488155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:13.627 [2024-11-25 12:18:14.488170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.994 ms 00:20:13.627 [2024-11-25 12:18:14.488189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.488251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.488263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:13.627 [2024-11-25 12:18:14.488276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:13.627 [2024-11-25 12:18:14.488283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.488354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.627 [2024-11-25 12:18:14.488363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:13.627 [2024-11-25 12:18:14.488373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:13.627 [2024-11-25 12:18:14.488380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.627 [2024-11-25 12:18:14.489127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.627 { 00:20:13.627 "name": "ftl0", 00:20:13.627 "uuid": "c35f6ec4-de26-452c-bcbe-87dd6023e02d" 00:20:13.627 } 00:20:13.627 [2024-11-25 12:18:14.492142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2388.147 ms, result 0 00:20:13.627 [2024-11-25 12:18:14.492743] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.627 12:18:14 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:13.627 12:18:14 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:13.627 12:18:14 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.627 12:18:14 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:13.627 12:18:14 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.627 12:18:14 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.627 12:18:14 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:13.887 12:18:14 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:13.887 [ 00:20:13.887 { 00:20:13.887 "name": "ftl0", 00:20:13.887 "aliases": [ 00:20:13.887 "c35f6ec4-de26-452c-bcbe-87dd6023e02d" 00:20:13.887 ], 00:20:13.887 "product_name": "FTL disk", 00:20:13.887 "block_size": 4096, 00:20:13.887 "num_blocks": 23592960, 00:20:13.887 "uuid": "c35f6ec4-de26-452c-bcbe-87dd6023e02d", 00:20:13.887 "assigned_rate_limits": { 00:20:13.887 "rw_ios_per_sec": 0, 00:20:13.887 "rw_mbytes_per_sec": 0, 00:20:13.887 "r_mbytes_per_sec": 0, 00:20:13.887 "w_mbytes_per_sec": 0 00:20:13.887 }, 00:20:13.887 "claimed": false, 00:20:13.887 "zoned": false, 00:20:13.887 "supported_io_types": { 00:20:13.887 "read": true, 00:20:13.887 "write": true, 00:20:13.887 "unmap": true, 00:20:13.887 "flush": true, 00:20:13.887 "reset": false, 00:20:13.887 "nvme_admin": false, 00:20:13.887 "nvme_io": false, 00:20:13.887 "nvme_io_md": false, 00:20:13.887 "write_zeroes": true, 00:20:13.887 "zcopy": false, 00:20:13.887 "get_zone_info": false, 00:20:13.887 "zone_management": false, 00:20:13.887 "zone_append": false, 00:20:13.887 "compare": false, 00:20:13.887 "compare_and_write": false, 00:20:13.887 "abort": false, 00:20:13.887 "seek_hole": false, 00:20:13.887 "seek_data": false, 00:20:13.887 "copy": false, 00:20:13.887 "nvme_iov_md": false 00:20:13.887 }, 00:20:13.887 "driver_specific": { 00:20:13.887 "ftl": { 00:20:13.887 "base_bdev": "2d53d5d8-45b5-4169-876d-6d70785b4f8d", 00:20:13.887 "cache": "nvc0n1p0" 00:20:13.887 } 00:20:13.887 } 00:20:13.887 } 00:20:13.887 ] 00:20:13.887 12:18:14 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:13.887 12:18:14 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:13.887 12:18:14 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:14.146 12:18:15 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:14.146 12:18:15 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:14.404 12:18:15 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:14.404 { 00:20:14.404 "name": "ftl0", 00:20:14.404 "aliases": [ 00:20:14.404 "c35f6ec4-de26-452c-bcbe-87dd6023e02d" 00:20:14.404 ], 00:20:14.404 "product_name": "FTL disk", 00:20:14.404 "block_size": 4096, 00:20:14.404 "num_blocks": 23592960, 00:20:14.404 "uuid": "c35f6ec4-de26-452c-bcbe-87dd6023e02d", 00:20:14.404 "assigned_rate_limits": { 00:20:14.404 "rw_ios_per_sec": 0, 00:20:14.404 "rw_mbytes_per_sec": 0, 00:20:14.404 "r_mbytes_per_sec": 0, 00:20:14.404 "w_mbytes_per_sec": 0 00:20:14.404 }, 00:20:14.404 "claimed": false, 00:20:14.404 "zoned": false, 00:20:14.404 "supported_io_types": { 00:20:14.404 "read": true, 00:20:14.404 "write": true, 00:20:14.404 "unmap": true, 00:20:14.404 "flush": true, 00:20:14.404 "reset": false, 00:20:14.404 "nvme_admin": false, 00:20:14.404 "nvme_io": false, 00:20:14.404 "nvme_io_md": false, 00:20:14.404 "write_zeroes": true, 00:20:14.404 "zcopy": false, 00:20:14.404 "get_zone_info": false, 00:20:14.404 "zone_management": false, 00:20:14.404 "zone_append": false, 00:20:14.404 "compare": false, 00:20:14.404 "compare_and_write": false, 00:20:14.404 "abort": false, 00:20:14.404 "seek_hole": false, 00:20:14.404 "seek_data": false, 00:20:14.404 "copy": false, 00:20:14.404 "nvme_iov_md": false 00:20:14.404 }, 00:20:14.404 "driver_specific": { 00:20:14.404 "ftl": { 00:20:14.404 "base_bdev": "2d53d5d8-45b5-4169-876d-6d70785b4f8d", 
00:20:14.404 "cache": "nvc0n1p0" 00:20:14.404 } 00:20:14.404 } 00:20:14.404 } 00:20:14.404 ]' 00:20:14.404 12:18:15 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:14.404 12:18:15 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:14.404 12:18:15 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:14.664 [2024-11-25 12:18:15.532321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.532370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:14.664 [2024-11-25 12:18:15.532385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.664 [2024-11-25 12:18:15.532397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.532431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:14.664 [2024-11-25 12:18:15.535036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.535066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:14.664 [2024-11-25 12:18:15.535080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:20:14.664 [2024-11-25 12:18:15.535089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.535593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.535620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:14.664 [2024-11-25 12:18:15.535631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:20:14.664 [2024-11-25 12:18:15.535638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.539281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.539303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:14.664 [2024-11-25 12:18:15.539313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:20:14.664 [2024-11-25 12:18:15.539320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.546331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.546459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:14.664 [2024-11-25 12:18:15.546479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.969 ms 00:20:14.664 [2024-11-25 12:18:15.546487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.570069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.570103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:14.664 [2024-11-25 12:18:15.570118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.502 ms 00:20:14.664 [2024-11-25 12:18:15.570126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.584386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.584430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:14.664 [2024-11-25 12:18:15.584445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.200 ms 00:20:14.664 [2024-11-25 12:18:15.584456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.584673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.584690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:14.664 [2024-11-25 12:18:15.584700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:14.664 [2024-11-25 12:18:15.584708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.608369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.608417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:14.664 [2024-11-25 12:18:15.608431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.633 ms 00:20:14.664 [2024-11-25 12:18:15.608439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.631205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.631242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:14.664 [2024-11-25 12:18:15.631257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.656 ms 00:20:14.664 [2024-11-25 12:18:15.631266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.653728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.653896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:14.664 [2024-11-25 12:18:15.653916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.389 ms 00:20:14.664 [2024-11-25 12:18:15.653924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.676308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.664 [2024-11-25 12:18:15.676445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:14.664 [2024-11-25 12:18:15.676465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.248 ms 00:20:14.664 [2024-11-25 12:18:15.676473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.664 [2024-11-25 12:18:15.676537] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:14.664 [2024-11-25 12:18:15.676551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676614] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:14.664 [2024-11-25 12:18:15.676654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 
[2024-11-25 12:18:15.676834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.676996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:14.665 [2024-11-25 12:18:15.677064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:14.665 [2024-11-25 12:18:15.677451] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:14.665 [2024-11-25 12:18:15.677461] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c35f6ec4-de26-452c-bcbe-87dd6023e02d 00:20:14.665 [2024-11-25 12:18:15.677469] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:14.665 [2024-11-25 12:18:15.677477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:14.665 [2024-11-25 12:18:15.677484] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:14.665 [2024-11-25 12:18:15.677493] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:14.665 [2024-11-25 12:18:15.677502] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:14.665 [2024-11-25 12:18:15.677511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:14.665 [2024-11-25 12:18:15.677518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:14.665 [2024-11-25 12:18:15.677526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:14.665 [2024-11-25 12:18:15.677532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:14.665 [2024-11-25 12:18:15.677540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.665 [2024-11-25 12:18:15.677547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:14.665 [2024-11-25 12:18:15.677557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:20:14.665 [2024-11-25 12:18:15.677568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.665 [2024-11-25 12:18:15.690247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.665 [2024-11-25 12:18:15.690283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:14.665 [2024-11-25 12:18:15.690300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.629 ms 00:20:14.665 [2024-11-25 12:18:15.690308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.665 [2024-11-25 12:18:15.690688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.665 [2024-11-25 12:18:15.690708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:14.665 [2024-11-25 12:18:15.690719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:20:14.665 [2024-11-25 12:18:15.690726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.665 [2024-11-25 12:18:15.734063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.665 [2024-11-25 12:18:15.734114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.665 [2024-11-25 12:18:15.734127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.665 [2024-11-25 12:18:15.734135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.665 [2024-11-25 12:18:15.734255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.665 [2024-11-25 12:18:15.734264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.665 [2024-11-25 12:18:15.734273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.665 [2024-11-25 12:18:15.734281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.665 [2024-11-25 12:18:15.734343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.665 [2024-11-25 12:18:15.734352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.665 [2024-11-25 12:18:15.734365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.665 [2024-11-25 12:18:15.734372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.665 [2024-11-25 12:18:15.734401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.665 [2024-11-25 12:18:15.734409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.665 [2024-11-25 12:18:15.734418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.665 [2024-11-25 12:18:15.734425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.924 [2024-11-25 12:18:15.814175] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.924 [2024-11-25 12:18:15.814221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.924 [2024-11-25 12:18:15.814234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.924 [2024-11-25 12:18:15.814242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.924 [2024-11-25 12:18:15.876198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.924 [2024-11-25 12:18:15.876242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.924 [2024-11-25 12:18:15.876254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.924 [2024-11-25 12:18:15.876262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.924 [2024-11-25 12:18:15.876355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.924 [2024-11-25 12:18:15.876364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.924 [2024-11-25 12:18:15.876388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.924 [2024-11-25 12:18:15.876398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.924 [2024-11-25 12:18:15.876445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.924 [2024-11-25 12:18:15.876454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.924 [2024-11-25 12:18:15.876463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.924 [2024-11-25 12:18:15.876470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.924 [2024-11-25 12:18:15.876581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.924 [2024-11-25 12:18:15.876590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.924 [2024-11-25 12:18:15.876600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.924 [2024-11-25 12:18:15.876607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.924 [2024-11-25 12:18:15.876656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.924 [2024-11-25 12:18:15.876665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:14.924 [2024-11-25 12:18:15.876674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.924 [2024-11-25 12:18:15.876681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.924 [2024-11-25 12:18:15.876728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.924 [2024-11-25 12:18:15.876736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.924 [2024-11-25 12:18:15.876747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.924 [2024-11-25 12:18:15.876754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.924 [2024-11-25 12:18:15.876810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.925 [2024-11-25 12:18:15.876819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.925 [2024-11-25 12:18:15.876828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.925 [2024-11-25 12:18:15.876835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:14.925 [2024-11-25 12:18:15.877017] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.683 ms, result 0 00:20:14.925 true 00:20:14.925 12:18:15 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76481 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76481 ']' 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76481 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76481 00:20:14.925 killing process with pid 76481 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76481' 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76481 00:20:14.925 12:18:15 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76481 00:20:21.481 12:18:21 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:21.738 65536+0 records in 00:20:21.738 65536+0 records out 00:20:21.738 268435456 bytes (268 MB, 256 MiB) copied, 1.08025 s, 248 MB/s 00:20:21.738 12:18:22 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:21.995 [2024-11-25 12:18:22.840154] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
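For readers following the trace: the block above is ftl/trim.sh sizing the FTL bdev (jq pulls num_blocks=23592960 from the bdev_get_bdevs dump), unloading ftl0 (which emits the 'FTL shutdown' management trace, the band-validity dump, and the device statistics), killing the target process, and generating a 256 MiB random pattern. The spdk_dd invocation whose startup log follows then writes that pattern onto ftl0. A condensed sketch of the sequence, reconstructed from the command traces in this log — the of= destination for dd is an assumption inferred from the --if path later passed to spdk_dd; every other path and argument appears verbatim above:

    SPDK=/home/vagrant/spdk_repo/spdk
    # trim.sh@60: FTL bdev size in 4 KiB blocks, from the JSON dumped above.
    nb=$("$SPDK/scripts/rpc.py" bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')   # 23592960
    # trim.sh@61: clean teardown; this is what triggers the 'FTL shutdown' trace above.
    "$SPDK/scripts/rpc.py" bdev_ftl_unload -b ftl0
    # trim.sh@66: 65536 x 4 KiB = 256 MiB of random data (matches the dd output: 268435456 bytes).
    dd if=/dev/urandom of="$SPDK/test/ftl/random_pattern" bs=4K count=65536
    # trim.sh@69: bring FTL back up inside spdk_dd and copy the pattern onto it,
    # driven by the config captured earlier with save_subsystem_config.
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
        --json="$SPDK/test/ftl/config/ftl.json"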
00:20:21.995 [2024-11-25 12:18:22.840271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76657 ] 00:20:21.995 [2024-11-25 12:18:22.999996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.252 [2024-11-25 12:18:23.098929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.512 [2024-11-25 12:18:23.366598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.512 [2024-11-25 12:18:23.366826] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.512 [2024-11-25 12:18:23.522409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.522466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:22.512 [2024-11-25 12:18:23.522480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:22.512 [2024-11-25 12:18:23.522489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.525861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.526046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.512 [2024-11-25 12:18:23.526074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.353 ms 00:20:22.512 [2024-11-25 12:18:23.526089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.526249] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:22.512 [2024-11-25 12:18:23.527180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:22.512 [2024-11-25 12:18:23.527214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.527227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.512 [2024-11-25 12:18:23.527242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:20:22.512 [2024-11-25 12:18:23.527255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.528438] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:22.512 [2024-11-25 12:18:23.544464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.544516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:22.512 [2024-11-25 12:18:23.544535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.027 ms 00:20:22.512 [2024-11-25 12:18:23.544548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.544642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.544653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:22.512 [2024-11-25 12:18:23.544661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:22.512 [2024-11-25 12:18:23.544669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.549750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:22.512 [2024-11-25 12:18:23.549915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.512 [2024-11-25 12:18:23.549938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:20:22.512 [2024-11-25 12:18:23.549974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.550097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.550111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.512 [2024-11-25 12:18:23.550125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:22.512 [2024-11-25 12:18:23.550136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.550172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.550186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:22.512 [2024-11-25 12:18:23.550199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:22.512 [2024-11-25 12:18:23.550212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.550245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:22.512 [2024-11-25 12:18:23.554769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.554807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.512 [2024-11-25 12:18:23.554822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.532 ms 00:20:22.512 [2024-11-25 12:18:23.554836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.554923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.554938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:22.512 [2024-11-25 12:18:23.554969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:22.512 [2024-11-25 12:18:23.554981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.555012] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:22.512 [2024-11-25 12:18:23.555039] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:22.512 [2024-11-25 12:18:23.555087] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:22.512 [2024-11-25 12:18:23.555111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:22.512 [2024-11-25 12:18:23.555259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:22.512 [2024-11-25 12:18:23.555275] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:22.512 [2024-11-25 12:18:23.555291] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:22.512 [2024-11-25 12:18:23.555305] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:22.512 [2024-11-25 12:18:23.555322] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:22.512 [2024-11-25 12:18:23.555334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:22.512 [2024-11-25 12:18:23.555346] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:22.512 [2024-11-25 12:18:23.555358] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:22.512 [2024-11-25 12:18:23.555371] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:22.512 [2024-11-25 12:18:23.555385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.555398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:22.512 [2024-11-25 12:18:23.555412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:20:22.512 [2024-11-25 12:18:23.555425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.555554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.512 [2024-11-25 12:18:23.555568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:22.512 [2024-11-25 12:18:23.555584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:22.512 [2024-11-25 12:18:23.555595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.512 [2024-11-25 12:18:23.555738] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:22.512 [2024-11-25 12:18:23.555754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:22.512 [2024-11-25 12:18:23.555767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.513 [2024-11-25 12:18:23.555781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.513 [2024-11-25 12:18:23.555795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:22.513 [2024-11-25 12:18:23.555808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:22.513 [2024-11-25 12:18:23.555819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:22.513 [2024-11-25 12:18:23.555829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:22.513 [2024-11-25 12:18:23.555840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:22.513 [2024-11-25 12:18:23.555851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.513 [2024-11-25 12:18:23.555861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:22.513 [2024-11-25 12:18:23.555871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:22.513 [2024-11-25 12:18:23.555882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.513 [2024-11-25 12:18:23.555900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:22.513 [2024-11-25 12:18:23.555911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:22.513 [2024-11-25 12:18:23.555921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.513 [2024-11-25 12:18:23.555932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:22.513 [2024-11-25 12:18:23.555942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:22.513 [2024-11-25 12:18:23.555972] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.513 [2024-11-25 12:18:23.555983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:22.513 [2024-11-25 12:18:23.555996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:22.513 [2024-11-25 12:18:23.556009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.513 [2024-11-25 12:18:23.556021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:22.513 [2024-11-25 12:18:23.556033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:22.513 [2024-11-25 12:18:23.556048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.513 [2024-11-25 12:18:23.556061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:22.513 [2024-11-25 12:18:23.556073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:22.513 [2024-11-25 12:18:23.556084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.513 [2024-11-25 12:18:23.556094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:22.513 [2024-11-25 12:18:23.556104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:22.513 [2024-11-25 12:18:23.556115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.513 [2024-11-25 12:18:23.556125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:22.513 [2024-11-25 12:18:23.556136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:22.513 [2024-11-25 12:18:23.556146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.513 [2024-11-25 12:18:23.556157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:22.513 [2024-11-25 12:18:23.556167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:22.513 [2024-11-25 12:18:23.556179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.513 [2024-11-25 12:18:23.556189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:22.513 [2024-11-25 12:18:23.556200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:22.513 [2024-11-25 12:18:23.556210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.513 [2024-11-25 12:18:23.556221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:22.513 [2024-11-25 12:18:23.556232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:22.513 [2024-11-25 12:18:23.556242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.513 [2024-11-25 12:18:23.556252] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:22.513 [2024-11-25 12:18:23.556265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:22.513 [2024-11-25 12:18:23.556279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.513 [2024-11-25 12:18:23.556296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.513 [2024-11-25 12:18:23.556310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:22.513 [2024-11-25 12:18:23.556322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:22.513 [2024-11-25 12:18:23.556334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:22.513 
[2024-11-25 12:18:23.556345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:22.513 [2024-11-25 12:18:23.556357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:22.513 [2024-11-25 12:18:23.556370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:22.513 [2024-11-25 12:18:23.556384] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:22.513 [2024-11-25 12:18:23.556399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.513 [2024-11-25 12:18:23.556413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:22.513 [2024-11-25 12:18:23.556426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:22.513 [2024-11-25 12:18:23.556438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:22.513 [2024-11-25 12:18:23.556449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:22.513 [2024-11-25 12:18:23.556460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:22.513 [2024-11-25 12:18:23.556472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:22.513 [2024-11-25 12:18:23.556483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:22.513 [2024-11-25 12:18:23.556495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:22.513 [2024-11-25 12:18:23.556506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:22.513 [2024-11-25 12:18:23.556518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:22.513 [2024-11-25 12:18:23.556529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:22.513 [2024-11-25 12:18:23.556540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:22.513 [2024-11-25 12:18:23.556551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:22.513 [2024-11-25 12:18:23.556564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:22.513 [2024-11-25 12:18:23.556575] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:22.513 [2024-11-25 12:18:23.556587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.513 [2024-11-25 12:18:23.556599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:22.513 [2024-11-25 12:18:23.556610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:22.513 [2024-11-25 12:18:23.556622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:22.513 [2024-11-25 12:18:23.556634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:22.513 [2024-11-25 12:18:23.556647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.513 [2024-11-25 12:18:23.556660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:22.513 [2024-11-25 12:18:23.556677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:20:22.513 [2024-11-25 12:18:23.556690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.513 [2024-11-25 12:18:23.584047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.513 [2024-11-25 12:18:23.584086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.514 [2024-11-25 12:18:23.584096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.287 ms 00:20:22.514 [2024-11-25 12:18:23.584104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.514 [2024-11-25 12:18:23.584231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.514 [2024-11-25 12:18:23.584245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:22.514 [2024-11-25 12:18:23.584253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:22.514 [2024-11-25 12:18:23.584260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.771 [2024-11-25 12:18:23.622709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.771 [2024-11-25 12:18:23.622755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.771 [2024-11-25 12:18:23.622768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.427 ms 00:20:22.772 [2024-11-25 12:18:23.622779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.622879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.622891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.772 [2024-11-25 12:18:23.622900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:22.772 [2024-11-25 12:18:23.622907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.623231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.623254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.772 [2024-11-25 12:18:23.623264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:22.772 [2024-11-25 12:18:23.623277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.623401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.623411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.772 [2024-11-25 12:18:23.623419] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:22.772 [2024-11-25 12:18:23.623426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.636331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.636363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:22.772 [2024-11-25 12:18:23.636373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.885 ms 00:20:22.772 [2024-11-25 12:18:23.636380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.648572] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:22.772 [2024-11-25 12:18:23.648609] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:22.772 [2024-11-25 12:18:23.648620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.648629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:22.772 [2024-11-25 12:18:23.648637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.140 ms 00:20:22.772 [2024-11-25 12:18:23.648644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.672455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.672493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:22.772 [2024-11-25 12:18:23.672511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.739 ms 00:20:22.772 [2024-11-25 12:18:23.672520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.683743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.683775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:22.772 [2024-11-25 12:18:23.683785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.145 ms 00:20:22.772 [2024-11-25 12:18:23.683792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.694877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.694906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.772 [2024-11-25 12:18:23.694916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.023 ms 00:20:22.772 [2024-11-25 12:18:23.694923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.695529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.695554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.772 [2024-11-25 12:18:23.695564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:20:22.772 [2024-11-25 12:18:23.695571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.749439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.749493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.772 [2024-11-25 12:18:23.749507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.844 ms 00:20:22.772 [2024-11-25 12:18:23.749514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.759944] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:22.772 [2024-11-25 12:18:23.773579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.773616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.772 [2024-11-25 12:18:23.773627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.954 ms 00:20:22.772 [2024-11-25 12:18:23.773635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.773718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.773731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.772 [2024-11-25 12:18:23.773739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:22.772 [2024-11-25 12:18:23.773747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.773793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.773802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.772 [2024-11-25 12:18:23.773810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:22.772 [2024-11-25 12:18:23.773817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.773841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.773849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.772 [2024-11-25 12:18:23.773858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:22.772 [2024-11-25 12:18:23.773866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.773896] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.772 [2024-11-25 12:18:23.773906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.773913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.772 [2024-11-25 12:18:23.773920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:22.772 [2024-11-25 12:18:23.773927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.796575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.796614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.772 [2024-11-25 12:18:23.796625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.630 ms 00:20:22.772 [2024-11-25 12:18:23.796633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.772 [2024-11-25 12:18:23.796721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.772 [2024-11-25 12:18:23.796732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:22.772 [2024-11-25 12:18:23.796740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:22.772 [2024-11-25 12:18:23.796747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
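The startup trace above is internally consistent, which is easy to cross-check by hand: the layout reports 23592960 L2P entries with an L2P address size of 4, and the NV cache layout reserves exactly that many bytes for the l2p region (90.00 MiB). The same entry count is the num_blocks reported for ftl0 earlier, so the mapping table covers each 4 KiB user block once. A quick arithmetic check in plain shell, using only numbers printed in this log:

    # One 4-byte L2P entry per user block:
    echo $(( 23592960 * 4 / 1048576 ))      # 90    -> matches 'Region l2p ... blocks: 90.00 MiB'
    # User-addressable capacity implied by num_blocks (4096-byte blocks):
    echo $(( 23592960 * 4096 / 1048576 ))   # 92160 -> 92160 MiB (90 GiB) of the 103424.00 MiB base device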
00:20:22.772 [2024-11-25 12:18:23.798099] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.772 [2024-11-25 12:18:23.801123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.400 ms, result 0 00:20:22.772 [2024-11-25 12:18:23.801779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:22.772 [2024-11-25 12:18:23.814885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.151  [2024-11-25T12:18:26.165Z] Copying: 43/256 [MB] (43 MBps) [2024-11-25T12:18:27.098Z] Copying: 87/256 [MB] (43 MBps) [2024-11-25T12:18:28.031Z] Copying: 135/256 [MB] (47 MBps) [2024-11-25T12:18:29.029Z] Copying: 179/256 [MB] (43 MBps) [2024-11-25T12:18:29.594Z] Copying: 222/256 [MB] (43 MBps) [2024-11-25T12:18:29.594Z] Copying: 256/256 [MB] (average 44 MBps)[2024-11-25 12:18:29.580448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.514 [2024-11-25 12:18:29.589499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.514 [2024-11-25 12:18:29.589541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.514 [2024-11-25 12:18:29.589555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:28.514 [2024-11-25 12:18:29.589565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.514 [2024-11-25 12:18:29.589588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:28.514 [2024-11-25 12:18:29.592151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.514 [2024-11-25 12:18:29.592188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.514 [2024-11-25 12:18:29.592200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:20:28.514 [2024-11-25 12:18:29.592209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.593761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.593793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.774 [2024-11-25 12:18:29.593802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:20:28.774 [2024-11-25 12:18:29.593811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.600504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.600538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.774 [2024-11-25 12:18:29.600553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.676 ms 00:20:28.774 [2024-11-25 12:18:29.600561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.607629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.607660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.774 [2024-11-25 12:18:29.607670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.037 ms 00:20:28.774 [2024-11-25 12:18:29.607681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 
12:18:29.630908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.630958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.774 [2024-11-25 12:18:29.630971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.168 ms 00:20:28.774 [2024-11-25 12:18:29.630979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.644198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.644241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.774 [2024-11-25 12:18:29.644261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.177 ms 00:20:28.774 [2024-11-25 12:18:29.644272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.644422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.644432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.774 [2024-11-25 12:18:29.644441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:28.774 [2024-11-25 12:18:29.644448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.667542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.667591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.774 [2024-11-25 12:18:29.667603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.075 ms 00:20:28.774 [2024-11-25 12:18:29.667612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.690786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.690832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.774 [2024-11-25 12:18:29.690844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.129 ms 00:20:28.774 [2024-11-25 12:18:29.690852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.713042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.713088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.774 [2024-11-25 12:18:29.713099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.146 ms 00:20:28.774 [2024-11-25 12:18:29.713107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.735439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.774 [2024-11-25 12:18:29.735490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.774 [2024-11-25 12:18:29.735501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.260 ms 00:20:28.774 [2024-11-25 12:18:29.735509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.774 [2024-11-25 12:18:29.735554] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.774 [2024-11-25 12:18:29.735577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:20:28.774 [2024-11-25 12:18:29.735596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.774 [2024-11-25 12:18:29.735656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:20:28.775 [2024-11-25 12:18:29.735783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.735992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.775 [2024-11-25 12:18:29.736151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736181] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.776 [2024-11-25 12:18:29.736375] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.776 [2024-11-25 12:18:29.736383] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: c35f6ec4-de26-452c-bcbe-87dd6023e02d 00:20:28.776 [2024-11-25 12:18:29.736395] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.776 [2024-11-25 12:18:29.736403] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.776 [2024-11-25 12:18:29.736410] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.776 [2024-11-25 12:18:29.736417] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.776 [2024-11-25 12:18:29.736424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.776 [2024-11-25 12:18:29.736432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.776 [2024-11-25 12:18:29.736439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.776 [2024-11-25 12:18:29.736445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.776 [2024-11-25 12:18:29.736451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.776 [2024-11-25 12:18:29.736458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.776 [2024-11-25 12:18:29.736466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.776 [2024-11-25 12:18:29.736477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:20:28.776 [2024-11-25 12:18:29.736484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.776 [2024-11-25 12:18:29.748706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.776 [2024-11-25 12:18:29.748749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.776 [2024-11-25 12:18:29.748762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.189 ms 00:20:28.776 [2024-11-25 12:18:29.748771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.776 [2024-11-25 12:18:29.749150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.776 [2024-11-25 12:18:29.749177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.776 [2024-11-25 12:18:29.749186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:20:28.776 [2024-11-25 12:18:29.749193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.776 [2024-11-25 12:18:29.783809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.776 [2024-11-25 12:18:29.783862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.776 [2024-11-25 12:18:29.783874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.776 [2024-11-25 12:18:29.783881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.776 [2024-11-25 12:18:29.784001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.776 [2024-11-25 12:18:29.784015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.776 [2024-11-25 12:18:29.784023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.776 [2024-11-25 12:18:29.784030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.776 [2024-11-25 12:18:29.784074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.776 [2024-11-25 12:18:29.784083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:20:28.776 [2024-11-25 12:18:29.784091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.776 [2024-11-25 12:18:29.784099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.776 [2024-11-25 12:18:29.784115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.776 [2024-11-25 12:18:29.784122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.776 [2024-11-25 12:18:29.784132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.776 [2024-11-25 12:18:29.784140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.033 [2024-11-25 12:18:29.862231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.033 [2024-11-25 12:18:29.862284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.033 [2024-11-25 12:18:29.862294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.033 [2024-11-25 12:18:29.862302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.033 [2024-11-25 12:18:29.925727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.033 [2024-11-25 12:18:29.925776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.033 [2024-11-25 12:18:29.925793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.033 [2024-11-25 12:18:29.925800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.033 [2024-11-25 12:18:29.925865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.033 [2024-11-25 12:18:29.925874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.033 [2024-11-25 12:18:29.925881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.033 [2024-11-25 12:18:29.925888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.033 [2024-11-25 12:18:29.925916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.033 [2024-11-25 12:18:29.925924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.033 [2024-11-25 12:18:29.925931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.033 [2024-11-25 12:18:29.925942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.033 [2024-11-25 12:18:29.926036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.033 [2024-11-25 12:18:29.926046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.033 [2024-11-25 12:18:29.926054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.033 [2024-11-25 12:18:29.926061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.033 [2024-11-25 12:18:29.926090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.033 [2024-11-25 12:18:29.926099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:29.033 [2024-11-25 12:18:29.926106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.033 [2024-11-25 12:18:29.926113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.033 [2024-11-25 12:18:29.926150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.033 [2024-11-25 12:18:29.926159] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:29.033 [2024-11-25 12:18:29.926166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:29.033 [2024-11-25 12:18:29.926173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.033 [2024-11-25 12:18:29.926212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.033 [2024-11-25 12:18:29.926221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:29.033 [2024-11-25 12:18:29.926229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:29.033 [2024-11-25 12:18:29.926238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.033 [2024-11-25 12:18:29.926364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.849 ms, result 0
00:20:29.965
00:20:29.965
00:20:29.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:29.965 12:18:30 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76743
00:20:29.965 12:18:30 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76743
00:20:29.965 12:18:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76743 ']'
00:20:29.965 12:18:30 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:29.965 12:18:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:29.965 12:18:30 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:29.965 12:18:30 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:29.965 12:18:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:29.965 12:18:30 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:29.965 [2024-11-25 12:18:31.008820] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization...
00:20:29.965 [2024-11-25 12:18:31.008984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76743 ]
00:20:30.223 [2024-11-25 12:18:31.169102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:30.223 [2024-11-25 12:18:31.290780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:31.156 12:18:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:31.156 12:18:31 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:20:31.156 12:18:31 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:20:31.156 [2024-11-25 12:18:32.118185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:31.156 [2024-11-25 12:18:32.118256] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:31.414 [2024-11-25 12:18:32.284005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.414 [2024-11-25 12:18:32.284055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:31.414 [2024-11-25 12:18:32.284070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:31.414 [2024-11-25 12:18:32.284079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:31.414 [2024-11-25 12:18:32.286781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.414 [2024-11-25 12:18:32.286822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:31.414 [2024-11-25 12:18:32.286834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms
00:20:31.414 [2024-11-25 12:18:32.286842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:31.414 [2024-11-25 12:18:32.287163] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:31.414 [2024-11-25 12:18:32.288172] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:31.414 [2024-11-25 12:18:32.288200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.414 [2024-11-25 12:18:32.288208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:31.414 [2024-11-25 12:18:32.288220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms
00:20:31.414 [2024-11-25 12:18:32.288227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:31.414 [2024-11-25 12:18:32.289661] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:31.414 [2024-11-25 12:18:32.302206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.414 [2024-11-25 12:18:32.302260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:31.414 [2024-11-25 12:18:32.302274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.547 ms
00:20:31.414 [2024-11-25 12:18:32.302285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:31.414 [2024-11-25 12:18:32.302391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.414 [2024-11-25 12:18:32.302405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:31.414 [2024-11-25 12:18:32.302414]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:31.414 [2024-11-25 12:18:32.302423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.414 [2024-11-25 12:18:32.307634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.414 [2024-11-25 12:18:32.307682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.414 [2024-11-25 12:18:32.307692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.159 ms 00:20:31.414 [2024-11-25 12:18:32.307702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.414 [2024-11-25 12:18:32.307822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.414 [2024-11-25 12:18:32.307845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.414 [2024-11-25 12:18:32.307854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:31.414 [2024-11-25 12:18:32.307863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.415 [2024-11-25 12:18:32.307894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.415 [2024-11-25 12:18:32.307905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.415 [2024-11-25 12:18:32.307912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:31.415 [2024-11-25 12:18:32.307921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.415 [2024-11-25 12:18:32.307962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:31.415 [2024-11-25 12:18:32.311323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.415 [2024-11-25 12:18:32.311355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.415 [2024-11-25 12:18:32.311367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.383 ms 00:20:31.415 [2024-11-25 12:18:32.311374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.415 [2024-11-25 12:18:32.311413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.415 [2024-11-25 12:18:32.311421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.415 [2024-11-25 12:18:32.311430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:31.415 [2024-11-25 12:18:32.311439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.415 [2024-11-25 12:18:32.311460] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:31.415 [2024-11-25 12:18:32.311478] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:31.415 [2024-11-25 12:18:32.311517] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.415 [2024-11-25 12:18:32.311533] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:31.415 [2024-11-25 12:18:32.311635] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.415 [2024-11-25 12:18:32.311647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.415 [2024-11-25 12:18:32.311661] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.415 [2024-11-25 12:18:32.311673] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.415 [2024-11-25 12:18:32.311684] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.415 [2024-11-25 12:18:32.311692] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:31.415 [2024-11-25 12:18:32.311701] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.415 [2024-11-25 12:18:32.311708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.415 [2024-11-25 12:18:32.311718] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.415 [2024-11-25 12:18:32.311725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.415 [2024-11-25 12:18:32.311734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.415 [2024-11-25 12:18:32.311741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:31.415 [2024-11-25 12:18:32.311750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.415 [2024-11-25 12:18:32.311851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.415 [2024-11-25 12:18:32.311868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.415 [2024-11-25 12:18:32.311876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:31.415 [2024-11-25 12:18:32.311885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.415 [2024-11-25 12:18:32.312050] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.415 [2024-11-25 12:18:32.312072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.415 [2024-11-25 12:18:32.312080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.415 [2024-11-25 12:18:32.312106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.415 [2024-11-25 12:18:32.312132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.415 [2024-11-25 12:18:32.312147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.415 [2024-11-25 12:18:32.312155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:31.415 [2024-11-25 12:18:32.312161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.415 [2024-11-25 12:18:32.312169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.415 [2024-11-25 12:18:32.312176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:31.415 [2024-11-25 12:18:32.312184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.415 
[2024-11-25 12:18:32.312190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.415 [2024-11-25 12:18:32.312198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.415 [2024-11-25 12:18:32.312226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.415 [2024-11-25 12:18:32.312250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.415 [2024-11-25 12:18:32.312270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.415 [2024-11-25 12:18:32.312292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.415 [2024-11-25 12:18:32.312313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.415 [2024-11-25 12:18:32.312329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.415 [2024-11-25 12:18:32.312337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:31.415 [2024-11-25 12:18:32.312343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.415 [2024-11-25 12:18:32.312351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.415 [2024-11-25 12:18:32.312358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:31.415 [2024-11-25 12:18:32.312369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:31.415 [2024-11-25 12:18:32.312383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:31.415 [2024-11-25 12:18:32.312390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312398] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.415 [2024-11-25 12:18:32.312405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.415 [2024-11-25 12:18:32.312415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.415 [2024-11-25 12:18:32.312431] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:31.415 [2024-11-25 12:18:32.312438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.415 [2024-11-25 12:18:32.312446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.415 [2024-11-25 12:18:32.312452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.415 [2024-11-25 12:18:32.312460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.415 [2024-11-25 12:18:32.312467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.415 [2024-11-25 12:18:32.312476] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.415 [2024-11-25 12:18:32.312486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.415 [2024-11-25 12:18:32.312497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:31.415 [2024-11-25 12:18:32.312504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:31.415 [2024-11-25 12:18:32.312513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:31.415 [2024-11-25 12:18:32.312520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:31.415 [2024-11-25 12:18:32.312529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:31.415 [2024-11-25 12:18:32.312536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:31.415 [2024-11-25 12:18:32.312544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:31.415 [2024-11-25 12:18:32.312551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:31.415 [2024-11-25 12:18:32.312559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:31.415 [2024-11-25 12:18:32.312566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:31.416 [2024-11-25 12:18:32.312574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:31.416 [2024-11-25 12:18:32.312581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:31.416 [2024-11-25 12:18:32.312589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:31.416 [2024-11-25 12:18:32.312596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:31.416 [2024-11-25 12:18:32.312604] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.416 [2024-11-25 
12:18:32.312612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.416 [2024-11-25 12:18:32.312623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.416 [2024-11-25 12:18:32.312630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.416 [2024-11-25 12:18:32.312639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.416 [2024-11-25 12:18:32.312646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.416 [2024-11-25 12:18:32.312655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.312662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.416 [2024-11-25 12:18:32.312671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:20:31.416 [2024-11-25 12:18:32.312678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.338943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.339014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.416 [2024-11-25 12:18:32.339028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.192 ms 00:20:31.416 [2024-11-25 12:18:32.339036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.339180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.339190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.416 [2024-11-25 12:18:32.339200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:31.416 [2024-11-25 12:18:32.339208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.369584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.369634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.416 [2024-11-25 12:18:32.369652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.351 ms 00:20:31.416 [2024-11-25 12:18:32.369660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.369744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.369753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.416 [2024-11-25 12:18:32.369764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.416 [2024-11-25 12:18:32.369771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.370117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.370144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.416 [2024-11-25 12:18:32.370155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:20:31.416 [2024-11-25 12:18:32.370165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.370293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.370311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.416 [2024-11-25 12:18:32.370322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:31.416 [2024-11-25 12:18:32.370329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.385181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.385222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.416 [2024-11-25 12:18:32.385236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.828 ms 00:20:31.416 [2024-11-25 12:18:32.385244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.398124] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:31.416 [2024-11-25 12:18:32.398175] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:31.416 [2024-11-25 12:18:32.398191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.398200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:31.416 [2024-11-25 12:18:32.398213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.789 ms 00:20:31.416 [2024-11-25 12:18:32.398221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.422997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.423051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:31.416 [2024-11-25 12:18:32.423066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.670 ms 00:20:31.416 [2024-11-25 12:18:32.423076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.435257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.435299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:31.416 [2024-11-25 12:18:32.435315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.072 ms 00:20:31.416 [2024-11-25 12:18:32.435323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.446861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.446904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:31.416 [2024-11-25 12:18:32.446917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.450 ms 00:20:31.416 [2024-11-25 12:18:32.446927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.416 [2024-11-25 12:18:32.447580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.416 [2024-11-25 12:18:32.447605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.416 [2024-11-25 12:18:32.447616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:20:31.416 [2024-11-25 12:18:32.447624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.673 [2024-11-25 
12:18:32.518025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.673 [2024-11-25 12:18:32.518087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:31.673 [2024-11-25 12:18:32.518105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.372 ms 00:20:31.673 [2024-11-25 12:18:32.518114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.673 [2024-11-25 12:18:32.528995] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:31.673 [2024-11-25 12:18:32.543379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.673 [2024-11-25 12:18:32.543436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:31.673 [2024-11-25 12:18:32.543452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.140 ms 00:20:31.673 [2024-11-25 12:18:32.543463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.673 [2024-11-25 12:18:32.543555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.673 [2024-11-25 12:18:32.543567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:31.673 [2024-11-25 12:18:32.543576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:31.673 [2024-11-25 12:18:32.543585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.673 [2024-11-25 12:18:32.543632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.673 [2024-11-25 12:18:32.543642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:31.673 [2024-11-25 12:18:32.543650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:31.673 [2024-11-25 12:18:32.543659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.673 [2024-11-25 12:18:32.543684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.673 [2024-11-25 12:18:32.543694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:31.673 [2024-11-25 12:18:32.543701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.673 [2024-11-25 12:18:32.543714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.673 [2024-11-25 12:18:32.543743] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:31.673 [2024-11-25 12:18:32.543756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.674 [2024-11-25 12:18:32.543763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:31.674 [2024-11-25 12:18:32.543775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:31.674 [2024-11-25 12:18:32.543782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.674 [2024-11-25 12:18:32.567369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.674 [2024-11-25 12:18:32.567418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:31.674 [2024-11-25 12:18:32.567433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.558 ms 00:20:31.674 [2024-11-25 12:18:32.567441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.674 [2024-11-25 12:18:32.567544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.674 [2024-11-25 12:18:32.567554] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:31.674 [2024-11-25 12:18:32.567565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:20:31.674 [2024-11-25 12:18:32.567575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:31.674 [2024-11-25 12:18:32.568648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:31.674 [2024-11-25 12:18:32.572042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.383 ms, result 0
00:20:31.674 [2024-11-25 12:18:32.572900] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:31.674 Some configs were skipped because the RPC state that can call them passed over.
00:20:31.674 12:18:32 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:31.930 [2024-11-25 12:18:32.799433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.930 [2024-11-25 12:18:32.799497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:31.930 [2024-11-25 12:18:32.799510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms
00:20:31.930 [2024-11-25 12:18:32.799521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:31.930 [2024-11-25 12:18:32.799553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.639 ms, result 0
00:20:31.930 true
00:20:31.930 12:18:32 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:31.930 [2024-11-25 12:18:32.995425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.930 [2024-11-25 12:18:32.995484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:31.930 [2024-11-25 12:18:32.995498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms
00:20:31.930 [2024-11-25 12:18:32.995506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:31.930 [2024-11-25 12:18:32.995544] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.338 ms, result 0
00:20:31.930 true
00:20:32.262 12:18:33 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76743
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76743 ']'
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76743
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76743
00:20:32.262 killing process with pid 76743
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76743'
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76743
00:20:32.262 12:18:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76743
00:20:32.830 [2024-11-25 12:18:33.748303]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.748367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:32.830 [2024-11-25 12:18:33.748380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:32.830 [2024-11-25 12:18:33.748390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.748412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:32.830 [2024-11-25 12:18:33.751036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.751070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:32.830 [2024-11-25 12:18:33.751089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:20:32.830 [2024-11-25 12:18:33.751098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.751390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.751400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:32.830 [2024-11-25 12:18:33.751410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:32.830 [2024-11-25 12:18:33.751417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.755495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.755529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:32.830 [2024-11-25 12:18:33.755542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.056 ms 00:20:32.830 [2024-11-25 12:18:33.755550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.762503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.762539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:32.830 [2024-11-25 12:18:33.762555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:20:32.830 [2024-11-25 12:18:33.762564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.772274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.772317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:32.830 [2024-11-25 12:18:33.772332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.651 ms 00:20:32.830 [2024-11-25 12:18:33.772347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.779472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.779513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:32.830 [2024-11-25 12:18:33.779529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.081 ms 00:20:32.830 [2024-11-25 12:18:33.779538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.779697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.779707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:32.830 [2024-11-25 12:18:33.779718] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:32.830 [2024-11-25 12:18:33.779726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.789488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.789528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:32.830 [2024-11-25 12:18:33.789541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.736 ms 00:20:32.830 [2024-11-25 12:18:33.789548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.799080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.799120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:32.830 [2024-11-25 12:18:33.799134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.490 ms 00:20:32.830 [2024-11-25 12:18:33.799142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.808095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.808137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:32.830 [2024-11-25 12:18:33.808150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.907 ms 00:20:32.830 [2024-11-25 12:18:33.808158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.817248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.830 [2024-11-25 12:18:33.817291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:32.830 [2024-11-25 12:18:33.817304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.018 ms 00:20:32.830 [2024-11-25 12:18:33.817312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.830 [2024-11-25 12:18:33.817378] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:32.830 [2024-11-25 12:18:33.817393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817482] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 
[2024-11-25 12:18:33.817690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:32.830 [2024-11-25 12:18:33.817697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:32.831 [2024-11-25 12:18:33.817893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.817997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:32.831 [2024-11-25 12:18:33.818245] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:32.831 [2024-11-25 12:18:33.818258] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c35f6ec4-de26-452c-bcbe-87dd6023e02d 00:20:32.831 [2024-11-25 12:18:33.818273] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:32.831 [2024-11-25 12:18:33.818284] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:32.831 [2024-11-25 12:18:33.818291] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:32.831 [2024-11-25 12:18:33.818300] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:32.831 [2024-11-25 12:18:33.818307] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:32.831 [2024-11-25 12:18:33.818316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:32.831 [2024-11-25 12:18:33.818323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:32.831 [2024-11-25 12:18:33.818331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:32.831 [2024-11-25 12:18:33.818338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:32.831 [2024-11-25 12:18:33.818347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
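Every FTL management step in this log is traced from mngt/ftl_mngt.c as a fixed four-record group: Action, then "name: ...", then "duration: ... ms", then "status: ...". With the console output saved to a file, ranking the slowest steps is a short awk pass. This is a minimal sketch, not part of the captured output: build.log is a hypothetical filename, and it assumes one record per line, as in the reconstructed records earlier in this log.

  # Pair each "name:" record (428:) with the "duration:" record (430:) that
  # follows it, then sort by duration in milliseconds, largest first.
  awk '
    /428:trace_step/ { sub(/.*name: /, "");     name = $0 }
    /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, ""); print $0, name }
  ' build.log | sort -rn | head

On this run it would surface steps like "Restore P2L checkpoints" (70.372 ms) and "Initialize metadata" (25.709 ms) at the top.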
00:20:32.831 [2024-11-25 12:18:33.818354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:32.831 [2024-11-25 12:18:33.818364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:20:32.831 [2024-11-25 12:18:33.818371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.831 [2024-11-25 12:18:33.830820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.831 [2024-11-25 12:18:33.830866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:32.831 [2024-11-25 12:18:33.830882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.423 ms 00:20:32.831 [2024-11-25 12:18:33.830891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.831 [2024-11-25 12:18:33.831300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.831 [2024-11-25 12:18:33.831325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:32.831 [2024-11-25 12:18:33.831336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:20:32.831 [2024-11-25 12:18:33.831346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.831 [2024-11-25 12:18:33.874812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.831 [2024-11-25 12:18:33.874865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.831 [2024-11-25 12:18:33.874879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.831 [2024-11-25 12:18:33.874887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.831 [2024-11-25 12:18:33.875019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.831 [2024-11-25 12:18:33.875030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.831 [2024-11-25 12:18:33.875040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.831 [2024-11-25 12:18:33.875049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.831 [2024-11-25 12:18:33.875096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.832 [2024-11-25 12:18:33.875104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.832 [2024-11-25 12:18:33.875116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.832 [2024-11-25 12:18:33.875123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.832 [2024-11-25 12:18:33.875141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.832 [2024-11-25 12:18:33.875149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.832 [2024-11-25 12:18:33.875158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.832 [2024-11-25 12:18:33.875165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 12:18:33.952768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.090 [2024-11-25 12:18:33.952819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.090 [2024-11-25 12:18:33.952832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.090 [2024-11-25 12:18:33.952840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 
12:18:34.017265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.090 [2024-11-25 12:18:34.017315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.090 [2024-11-25 12:18:34.017328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.090 [2024-11-25 12:18:34.017339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 12:18:34.017439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.090 [2024-11-25 12:18:34.017449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.090 [2024-11-25 12:18:34.017462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.090 [2024-11-25 12:18:34.017469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 12:18:34.017497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.090 [2024-11-25 12:18:34.017505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.090 [2024-11-25 12:18:34.017514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.090 [2024-11-25 12:18:34.017521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 12:18:34.017614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.090 [2024-11-25 12:18:34.017623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.090 [2024-11-25 12:18:34.017633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.090 [2024-11-25 12:18:34.017640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 12:18:34.017671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.090 [2024-11-25 12:18:34.017680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.090 [2024-11-25 12:18:34.017689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.090 [2024-11-25 12:18:34.017696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 12:18:34.017731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.090 [2024-11-25 12:18:34.017741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.090 [2024-11-25 12:18:34.017752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.090 [2024-11-25 12:18:34.017759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 12:18:34.017800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.090 [2024-11-25 12:18:34.017809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.090 [2024-11-25 12:18:34.017819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.090 [2024-11-25 12:18:34.017826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.090 [2024-11-25 12:18:34.017974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 269.627 ms, result 0 00:20:33.655 12:18:34 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:33.655 12:18:34 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:33.913 [2024-11-25 12:18:34.742758] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:20:33.913 [2024-11-25 12:18:34.742883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76802 ] 00:20:33.913 [2024-11-25 12:18:34.897851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.171 [2024-11-25 12:18:34.996899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.431 [2024-11-25 12:18:35.250360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:34.431 [2024-11-25 12:18:35.250423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:34.431 [2024-11-25 12:18:35.405193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.405252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:34.431 [2024-11-25 12:18:35.405266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:34.431 [2024-11-25 12:18:35.405274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.407935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.407985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.431 [2024-11-25 12:18:35.407996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:20:34.431 [2024-11-25 12:18:35.408003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.408077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:34.431 [2024-11-25 12:18:35.408736] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:34.431 [2024-11-25 12:18:35.408762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.408770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.431 [2024-11-25 12:18:35.408778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:20:34.431 [2024-11-25 12:18:35.408786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.410074] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:34.431 [2024-11-25 12:18:35.422486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.422536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:34.431 [2024-11-25 12:18:35.422549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.413 ms 00:20:34.431 [2024-11-25 12:18:35.422557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.422663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.422674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:34.431 [2024-11-25 12:18:35.422683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.022 ms 00:20:34.431 [2024-11-25 12:18:35.422691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.427746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.427784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.431 [2024-11-25 12:18:35.427793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.013 ms 00:20:34.431 [2024-11-25 12:18:35.427801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.427896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.427905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.431 [2024-11-25 12:18:35.427914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:34.431 [2024-11-25 12:18:35.427921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.427961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.427973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:34.431 [2024-11-25 12:18:35.427980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:34.431 [2024-11-25 12:18:35.427987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.428010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:34.431 [2024-11-25 12:18:35.431256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.431287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.431 [2024-11-25 12:18:35.431297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:20:34.431 [2024-11-25 12:18:35.431304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.431346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.431 [2024-11-25 12:18:35.431355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:34.431 [2024-11-25 12:18:35.431363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:34.431 [2024-11-25 12:18:35.431371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.431 [2024-11-25 12:18:35.431388] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:34.431 [2024-11-25 12:18:35.431407] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:34.431 [2024-11-25 12:18:35.431441] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:34.431 [2024-11-25 12:18:35.431456] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:34.431 [2024-11-25 12:18:35.431559] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:34.431 [2024-11-25 12:18:35.431577] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:34.431 [2024-11-25 12:18:35.431588] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:34.431 [2024-11-25 12:18:35.431598] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:34.432 [2024-11-25 12:18:35.431610] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:34.432 [2024-11-25 12:18:35.431618] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:34.432 [2024-11-25 12:18:35.431625] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:34.432 [2024-11-25 12:18:35.431632] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:34.432 [2024-11-25 12:18:35.431639] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:34.432 [2024-11-25 12:18:35.431647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.432 [2024-11-25 12:18:35.431655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:34.432 [2024-11-25 12:18:35.431662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:34.432 [2024-11-25 12:18:35.431669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.432 [2024-11-25 12:18:35.431757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.432 [2024-11-25 12:18:35.431765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:34.432 [2024-11-25 12:18:35.431775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:34.432 [2024-11-25 12:18:35.431782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.432 [2024-11-25 12:18:35.431881] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:34.432 [2024-11-25 12:18:35.431892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:34.432 [2024-11-25 12:18:35.431900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:34.432 [2024-11-25 12:18:35.431908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.432 [2024-11-25 12:18:35.431916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:34.432 [2024-11-25 12:18:35.431922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:34.432 [2024-11-25 12:18:35.431929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:34.432 [2024-11-25 12:18:35.431936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:34.432 [2024-11-25 12:18:35.431943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:34.432 [2024-11-25 12:18:35.431961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:34.432 [2024-11-25 12:18:35.431968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:34.432 [2024-11-25 12:18:35.431975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:34.432 [2024-11-25 12:18:35.431981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:34.432 [2024-11-25 12:18:35.431994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:34.432 [2024-11-25 12:18:35.432001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:34.432 [2024-11-25 12:18:35.432008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432016] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:34.432 [2024-11-25 12:18:35.432023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:34.432 [2024-11-25 12:18:35.432029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:34.432 [2024-11-25 12:18:35.432044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.432 [2024-11-25 12:18:35.432058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:34.432 [2024-11-25 12:18:35.432064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.432 [2024-11-25 12:18:35.432077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:34.432 [2024-11-25 12:18:35.432084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.432 [2024-11-25 12:18:35.432098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:34.432 [2024-11-25 12:18:35.432104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.432 [2024-11-25 12:18:35.432117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:34.432 [2024-11-25 12:18:35.432124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:34.432 [2024-11-25 12:18:35.432138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:34.432 [2024-11-25 12:18:35.432144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:34.432 [2024-11-25 12:18:35.432151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:34.432 [2024-11-25 12:18:35.432157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:34.432 [2024-11-25 12:18:35.432164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:34.432 [2024-11-25 12:18:35.432170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:34.432 [2024-11-25 12:18:35.432183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:34.432 [2024-11-25 12:18:35.432190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432196] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:34.432 [2024-11-25 12:18:35.432203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:34.432 [2024-11-25 12:18:35.432210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:34.432 [2024-11-25 12:18:35.432220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.432 [2024-11-25 12:18:35.432227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:34.432 
[2024-11-25 12:18:35.432234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:34.432 [2024-11-25 12:18:35.432240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:34.432 [2024-11-25 12:18:35.432247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:34.432 [2024-11-25 12:18:35.432256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:34.432 [2024-11-25 12:18:35.432262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:34.432 [2024-11-25 12:18:35.432270] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:34.432 [2024-11-25 12:18:35.432279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:34.432 [2024-11-25 12:18:35.432287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:34.432 [2024-11-25 12:18:35.432294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:34.432 [2024-11-25 12:18:35.432301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:34.432 [2024-11-25 12:18:35.432308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:34.432 [2024-11-25 12:18:35.432315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:34.432 [2024-11-25 12:18:35.432322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:34.432 [2024-11-25 12:18:35.432329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:34.432 [2024-11-25 12:18:35.432337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:34.432 [2024-11-25 12:18:35.432344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:34.432 [2024-11-25 12:18:35.432351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:34.432 [2024-11-25 12:18:35.432358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:34.432 [2024-11-25 12:18:35.432365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:34.432 [2024-11-25 12:18:35.432372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:34.432 [2024-11-25 12:18:35.432379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:34.433 [2024-11-25 12:18:35.432386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:34.433 [2024-11-25 12:18:35.432394] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:34.433 [2024-11-25 12:18:35.432402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:34.433 [2024-11-25 12:18:35.432409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:34.433 [2024-11-25 12:18:35.432415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:34.433 [2024-11-25 12:18:35.432422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:34.433 [2024-11-25 12:18:35.432430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.433 [2024-11-25 12:18:35.432437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:34.433 [2024-11-25 12:18:35.432446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:20:34.433 [2024-11-25 12:18:35.432453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.433 [2024-11-25 12:18:35.458229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.433 [2024-11-25 12:18:35.458277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:34.433 [2024-11-25 12:18:35.458290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.709 ms 00:20:34.433 [2024-11-25 12:18:35.458297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.433 [2024-11-25 12:18:35.458435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.433 [2024-11-25 12:18:35.458449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:34.433 [2024-11-25 12:18:35.458457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:34.433 [2024-11-25 12:18:35.458465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.508613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.508670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:34.692 [2024-11-25 12:18:35.508684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.125 ms 00:20:34.692 [2024-11-25 12:18:35.508695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.508817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.508829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:34.692 [2024-11-25 12:18:35.508838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:34.692 [2024-11-25 12:18:35.508845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.509192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.509217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:34.692 [2024-11-25 12:18:35.509226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:20:34.692 [2024-11-25 12:18:35.509240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 
12:18:35.509386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.509406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:34.692 [2024-11-25 12:18:35.509414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:34.692 [2024-11-25 12:18:35.509422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.522941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.522994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:34.692 [2024-11-25 12:18:35.523005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.498 ms 00:20:34.692 [2024-11-25 12:18:35.523013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.535281] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:34.692 [2024-11-25 12:18:35.535326] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:34.692 [2024-11-25 12:18:35.535340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.535348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:34.692 [2024-11-25 12:18:35.535358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.205 ms 00:20:34.692 [2024-11-25 12:18:35.535365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.560409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.560487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:34.692 [2024-11-25 12:18:35.560500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.946 ms 00:20:34.692 [2024-11-25 12:18:35.560508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.573265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.573321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:34.692 [2024-11-25 12:18:35.573334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.631 ms 00:20:34.692 [2024-11-25 12:18:35.573341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.585116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.585171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:34.692 [2024-11-25 12:18:35.585183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.660 ms 00:20:34.692 [2024-11-25 12:18:35.585191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.585858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.585885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:34.692 [2024-11-25 12:18:35.585896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:20:34.692 [2024-11-25 12:18:35.585903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.643787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.643848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:34.692 [2024-11-25 12:18:35.643862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.857 ms 00:20:34.692 [2024-11-25 12:18:35.643870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.654890] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:34.692 [2024-11-25 12:18:35.669618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.669668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:34.692 [2024-11-25 12:18:35.669682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.594 ms 00:20:34.692 [2024-11-25 12:18:35.669690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.669790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.669801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:34.692 [2024-11-25 12:18:35.669811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:34.692 [2024-11-25 12:18:35.669819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.669870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.669880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:34.692 [2024-11-25 12:18:35.669888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:34.692 [2024-11-25 12:18:35.669896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.669920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.669930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:34.692 [2024-11-25 12:18:35.669937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:34.692 [2024-11-25 12:18:35.669964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.669997] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:34.692 [2024-11-25 12:18:35.670007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.670014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:34.692 [2024-11-25 12:18:35.670022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:34.692 [2024-11-25 12:18:35.670028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.694499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.694552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:34.692 [2024-11-25 12:18:35.694565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.450 ms 00:20:34.692 [2024-11-25 12:18:35.694573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.692 [2024-11-25 12:18:35.694704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.692 [2024-11-25 12:18:35.694716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization
00:20:34.692 [2024-11-25 12:18:35.694725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:20:34.692 [2024-11-25 12:18:35.694732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:34.692 [2024-11-25 12:18:35.695606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:34.692 [2024-11-25 12:18:35.699107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 290.123 ms, result 0
00:20:34.692 [2024-11-25 12:18:35.699883] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:34.692 [2024-11-25 12:18:35.713387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:36.065  [2024-11-25T12:18:38.078Z] Copying: 44/256 [MB] (44 MBps)
[2024-11-25T12:18:39.012Z] Copying: 90/256 [MB] (45 MBps)
[2024-11-25T12:18:39.945Z] Copying: 133/256 [MB] (43 MBps)
[2024-11-25T12:18:40.879Z] Copying: 176/256 [MB] (43 MBps)
[2024-11-25T12:18:41.813Z] Copying: 218/256 [MB] (42 MBps)
[2024-11-25T12:18:41.813Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-25 12:18:41.595360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:40.733 [2024-11-25 12:18:41.602974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.733 [2024-11-25 12:18:41.603017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:40.733 [2024-11-25 12:18:41.603030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:40.733 [2024-11-25 12:18:41.603042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.733 [2024-11-25 12:18:41.603062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:40.733 [2024-11-25 12:18:41.605180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.733 [2024-11-25 12:18:41.605210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:40.733 [2024-11-25 12:18:41.605219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms
00:20:40.733 [2024-11-25 12:18:41.605226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.733 [2024-11-25 12:18:41.605476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.733 [2024-11-25 12:18:41.605494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:40.733 [2024-11-25 12:18:41.605502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms
00:20:40.733 [2024-11-25 12:18:41.605508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.733 [2024-11-25 12:18:41.608378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.733 [2024-11-25 12:18:41.608400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:40.733 [2024-11-25 12:18:41.608408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms
00:20:40.733 [2024-11-25 12:18:41.608415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.733 [2024-11-25 12:18:41.613974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.733 [2024-11-25 12:18:41.613997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
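The "Copying" progress above is the read-back step started at trim.sh@85 earlier in this log. Reassembled from the xtrace records into a standalone command, the invocation is the one below; 65536 blocks coming out as 256 MB is consistent with a 4 KiB FTL block size (an inference from the numbers, not stated in the log).

  # Dump 65536 blocks from the ftl0 bdev (described by ftl.json) into a file.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json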
00:20:40.733 [2024-11-25 12:18:41.614006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.544 ms 00:20:40.733 [2024-11-25 12:18:41.614012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.733 [2024-11-25 12:18:41.633273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.733 [2024-11-25 12:18:41.633317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:40.733 [2024-11-25 12:18:41.633328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.210 ms 00:20:40.733 [2024-11-25 12:18:41.633334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.733 [2024-11-25 12:18:41.644735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.733 [2024-11-25 12:18:41.644784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:40.733 [2024-11-25 12:18:41.644797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.353 ms 00:20:40.733 [2024-11-25 12:18:41.644807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.733 [2024-11-25 12:18:41.644922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.733 [2024-11-25 12:18:41.644930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:40.733 [2024-11-25 12:18:41.644937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:40.733 [2024-11-25 12:18:41.644943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.733 [2024-11-25 12:18:41.663650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.733 [2024-11-25 12:18:41.663689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:40.733 [2024-11-25 12:18:41.663700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.677 ms 00:20:40.733 [2024-11-25 12:18:41.663707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.733 [2024-11-25 12:18:41.681712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.733 [2024-11-25 12:18:41.681755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:40.733 [2024-11-25 12:18:41.681765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.957 ms 00:20:40.734 [2024-11-25 12:18:41.681771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.734 [2024-11-25 12:18:41.699229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.734 [2024-11-25 12:18:41.699282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:40.734 [2024-11-25 12:18:41.699294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.419 ms 00:20:40.734 [2024-11-25 12:18:41.699300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.734 [2024-11-25 12:18:41.716826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.734 [2024-11-25 12:18:41.716865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:40.734 [2024-11-25 12:18:41.716875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.466 ms 00:20:40.734 [2024-11-25 12:18:41.716882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.734 [2024-11-25 12:18:41.716919] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:40.734 [2024-11-25 
12:18:41.716931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.716999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 
[2024-11-25 12:18:41.717095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:20:40.734 [2024-11-25 12:18:41.717240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:40.734 [2024-11-25 12:18:41.717317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:40.735 [2024-11-25 12:18:41.717570] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:40.735 [2024-11-25 12:18:41.717577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c35f6ec4-de26-452c-bcbe-87dd6023e02d 00:20:40.735 [2024-11-25 12:18:41.717583] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:40.735 [2024-11-25 12:18:41.717589] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:40.735 [2024-11-25 12:18:41.717595] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:40.735 [2024-11-25 12:18:41.717601] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:40.735 [2024-11-25 12:18:41.717607] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:40.735 [2024-11-25 12:18:41.717614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:40.735 [2024-11-25 12:18:41.717619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:40.735 [2024-11-25 12:18:41.717624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:40.735 [2024-11-25 12:18:41.717630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:40.735 [2024-11-25 12:18:41.717635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.735 [2024-11-25 12:18:41.717645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:40.735 [2024-11-25 12:18:41.717652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:20:40.735 [2024-11-25 12:18:41.717658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.735 [2024-11-25 12:18:41.727782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.735 [2024-11-25 12:18:41.727825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:40.735 [2024-11-25 12:18:41.727835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.109 ms 00:20:40.735 [2024-11-25 12:18:41.727841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.735 [2024-11-25 12:18:41.728177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.735 [2024-11-25 12:18:41.728196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:40.735 [2024-11-25 12:18:41.728203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:20:40.735 [2024-11-25 12:18:41.728208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.735 [2024-11-25 12:18:41.756677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.735 [2024-11-25 12:18:41.756726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.735 [2024-11-25 12:18:41.756736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.735 [2024-11-25 12:18:41.756742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.735 [2024-11-25 12:18:41.756839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.735 [2024-11-25 12:18:41.756847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.735 [2024-11-25 12:18:41.756853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.735 [2024-11-25 12:18:41.756859] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:40.735 [2024-11-25 12:18:41.756899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.735 [2024-11-25 12:18:41.756907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.735 [2024-11-25 12:18:41.756913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.735 [2024-11-25 12:18:41.756918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.735 [2024-11-25 12:18:41.756932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.735 [2024-11-25 12:18:41.756941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.735 [2024-11-25 12:18:41.756958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.735 [2024-11-25 12:18:41.756964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.994 [2024-11-25 12:18:41.819746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.994 [2024-11-25 12:18:41.819798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.994 [2024-11-25 12:18:41.819808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.994 [2024-11-25 12:18:41.819815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.994 [2024-11-25 12:18:41.870760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.994 [2024-11-25 12:18:41.870817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.994 [2024-11-25 12:18:41.870827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.994 [2024-11-25 12:18:41.870834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.994 [2024-11-25 12:18:41.870897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.994 [2024-11-25 12:18:41.870904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.994 [2024-11-25 12:18:41.870911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.994 [2024-11-25 12:18:41.870917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.994 [2024-11-25 12:18:41.870940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.994 [2024-11-25 12:18:41.870955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.994 [2024-11-25 12:18:41.870964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.995 [2024-11-25 12:18:41.870970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.995 [2024-11-25 12:18:41.871046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.995 [2024-11-25 12:18:41.871053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.995 [2024-11-25 12:18:41.871060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.995 [2024-11-25 12:18:41.871065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.995 [2024-11-25 12:18:41.871090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.995 [2024-11-25 12:18:41.871097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:40.995 [2024-11-25 12:18:41.871103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:40.995 [2024-11-25 12:18:41.871111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.995 [2024-11-25 12:18:41.871141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.995 [2024-11-25 12:18:41.871148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.995 [2024-11-25 12:18:41.871154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.995 [2024-11-25 12:18:41.871160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.995 [2024-11-25 12:18:41.871195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.995 [2024-11-25 12:18:41.871203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.995 [2024-11-25 12:18:41.871212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.995 [2024-11-25 12:18:41.871218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.995 [2024-11-25 12:18:41.871330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.361 ms, result 0 00:20:41.561 00:20:41.561 00:20:41.561 12:18:42 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:41.561 12:18:42 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:42.127 12:18:42 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:42.127 [2024-11-25 12:18:43.035357] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:20:42.127 [2024-11-25 12:18:43.035478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76891 ] 00:20:42.127 [2024-11-25 12:18:43.191414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.385 [2024-11-25 12:18:43.272991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.646 [2024-11-25 12:18:43.482563] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.646 [2024-11-25 12:18:43.482617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.646 [2024-11-25 12:18:43.633199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.633251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.646 [2024-11-25 12:18:43.633261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:42.646 [2024-11-25 12:18:43.633268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.635323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.635354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.646 [2024-11-25 12:18:43.635362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:20:42.646 [2024-11-25 12:18:43.635368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.635452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:42.646 [2024-11-25 12:18:43.635987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.646 [2024-11-25 12:18:43.636010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.636017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.646 [2024-11-25 12:18:43.636024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:20:42.646 [2024-11-25 12:18:43.636030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.637049] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:42.646 [2024-11-25 12:18:43.646670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.646703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:42.646 [2024-11-25 12:18:43.646712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.622 ms 00:20:42.646 [2024-11-25 12:18:43.646718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.646793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.646802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:42.646 [2024-11-25 12:18:43.646809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:42.646 [2024-11-25 12:18:43.646815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.651306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:42.646 [2024-11-25 12:18:43.651334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.646 [2024-11-25 12:18:43.651343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.456 ms 00:20:42.646 [2024-11-25 12:18:43.651350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.651422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.651430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.646 [2024-11-25 12:18:43.651436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:42.646 [2024-11-25 12:18:43.651443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.651462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.651470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.646 [2024-11-25 12:18:43.651477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:42.646 [2024-11-25 12:18:43.651484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.651503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:42.646 [2024-11-25 12:18:43.654323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.654347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.646 [2024-11-25 12:18:43.654355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:20:42.646 [2024-11-25 12:18:43.654361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.654390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.646 [2024-11-25 12:18:43.654397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.646 [2024-11-25 12:18:43.654404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:42.646 [2024-11-25 12:18:43.654414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.646 [2024-11-25 12:18:43.654432] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:42.646 [2024-11-25 12:18:43.654453] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:42.646 [2024-11-25 12:18:43.654487] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:42.646 [2024-11-25 12:18:43.654503] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:42.646 [2024-11-25 12:18:43.654604] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:42.646 [2024-11-25 12:18:43.654613] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.647 [2024-11-25 12:18:43.654622] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:42.647 [2024-11-25 12:18:43.654630] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.647 [2024-11-25 12:18:43.654639] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.647 [2024-11-25 12:18:43.654646] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:42.647 [2024-11-25 12:18:43.654652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.647 [2024-11-25 12:18:43.654658] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:42.647 [2024-11-25 12:18:43.654664] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:42.647 [2024-11-25 12:18:43.654670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.647 [2024-11-25 12:18:43.654676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.647 [2024-11-25 12:18:43.654683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:20:42.647 [2024-11-25 12:18:43.654689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.647 [2024-11-25 12:18:43.654760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.647 [2024-11-25 12:18:43.654766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.647 [2024-11-25 12:18:43.654777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:42.647 [2024-11-25 12:18:43.654783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.647 [2024-11-25 12:18:43.654873] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.647 [2024-11-25 12:18:43.654890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.647 [2024-11-25 12:18:43.654897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.647 [2024-11-25 12:18:43.654904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.647 [2024-11-25 12:18:43.654911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:42.647 [2024-11-25 12:18:43.654917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.647 [2024-11-25 12:18:43.654922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:42.647 [2024-11-25 12:18:43.654928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:42.647 [2024-11-25 12:18:43.654934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:42.647 [2024-11-25 12:18:43.654939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.647 [2024-11-25 12:18:43.654954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.647 [2024-11-25 12:18:43.654960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:42.647 [2024-11-25 12:18:43.654967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.647 [2024-11-25 12:18:43.654978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.647 [2024-11-25 12:18:43.654983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:42.647 [2024-11-25 12:18:43.654989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.647 [2024-11-25 12:18:43.654994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.647 [2024-11-25 12:18:43.654999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:42.647 [2024-11-25 12:18:43.655005] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.647 [2024-11-25 12:18:43.655016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.647 [2024-11-25 12:18:43.655027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.647 [2024-11-25 12:18:43.655032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.647 [2024-11-25 12:18:43.655043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.647 [2024-11-25 12:18:43.655049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.647 [2024-11-25 12:18:43.655059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.647 [2024-11-25 12:18:43.655064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.647 [2024-11-25 12:18:43.655075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.647 [2024-11-25 12:18:43.655080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.647 [2024-11-25 12:18:43.655090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:42.647 [2024-11-25 12:18:43.655095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:42.647 [2024-11-25 12:18:43.655101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.647 [2024-11-25 12:18:43.655106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:42.647 [2024-11-25 12:18:43.655112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:42.647 [2024-11-25 12:18:43.655117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:42.647 [2024-11-25 12:18:43.655128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:42.647 [2024-11-25 12:18:43.655133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655138] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.647 [2024-11-25 12:18:43.655145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.647 [2024-11-25 12:18:43.655151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.647 [2024-11-25 12:18:43.655159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.647 [2024-11-25 12:18:43.655165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.647 [2024-11-25 12:18:43.655170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.647 [2024-11-25 12:18:43.655175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.647 
[2024-11-25 12:18:43.655181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.647 [2024-11-25 12:18:43.655186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.647 [2024-11-25 12:18:43.655191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.647 [2024-11-25 12:18:43.655197] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.647 [2024-11-25 12:18:43.655204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.647 [2024-11-25 12:18:43.655211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:42.647 [2024-11-25 12:18:43.655217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:42.647 [2024-11-25 12:18:43.655223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:42.647 [2024-11-25 12:18:43.655228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:42.647 [2024-11-25 12:18:43.655234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:42.647 [2024-11-25 12:18:43.655240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:42.647 [2024-11-25 12:18:43.655245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:42.647 [2024-11-25 12:18:43.655250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:42.647 [2024-11-25 12:18:43.655256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:42.647 [2024-11-25 12:18:43.655262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:42.648 [2024-11-25 12:18:43.655268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:42.648 [2024-11-25 12:18:43.655273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:42.648 [2024-11-25 12:18:43.655279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:42.648 [2024-11-25 12:18:43.655284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:42.648 [2024-11-25 12:18:43.655290] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.648 [2024-11-25 12:18:43.655296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.648 [2024-11-25 12:18:43.655302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.648 [2024-11-25 12:18:43.655308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.648 [2024-11-25 12:18:43.655314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.648 [2024-11-25 12:18:43.655319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:42.648 [2024-11-25 12:18:43.655325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.648 [2024-11-25 12:18:43.655332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.648 [2024-11-25 12:18:43.655340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:20:42.648 [2024-11-25 12:18:43.655346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.648 [2024-11-25 12:18:43.676647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.648 [2024-11-25 12:18:43.676687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.648 [2024-11-25 12:18:43.676698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.248 ms 00:20:42.648 [2024-11-25 12:18:43.676705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.648 [2024-11-25 12:18:43.676821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.648 [2024-11-25 12:18:43.676832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.648 [2024-11-25 12:18:43.676838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:42.648 [2024-11-25 12:18:43.676844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.648 [2024-11-25 12:18:43.714358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.648 [2024-11-25 12:18:43.714403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.648 [2024-11-25 12:18:43.714414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.495 ms 00:20:42.648 [2024-11-25 12:18:43.714424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.648 [2024-11-25 12:18:43.714515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.648 [2024-11-25 12:18:43.714524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.648 [2024-11-25 12:18:43.714531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:42.648 [2024-11-25 12:18:43.714537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.648 [2024-11-25 12:18:43.714845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.648 [2024-11-25 12:18:43.714866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.648 [2024-11-25 12:18:43.714874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:20:42.648 [2024-11-25 12:18:43.714879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.648 [2024-11-25 12:18:43.715004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.648 [2024-11-25 12:18:43.715017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.648 [2024-11-25 12:18:43.715024] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:42.648 [2024-11-25 12:18:43.715030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.905 [2024-11-25 12:18:43.726099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.905 [2024-11-25 12:18:43.726130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.905 [2024-11-25 12:18:43.726139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.051 ms 00:20:42.905 [2024-11-25 12:18:43.726146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.905 [2024-11-25 12:18:43.736242] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:42.905 [2024-11-25 12:18:43.736273] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:42.905 [2024-11-25 12:18:43.736284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.905 [2024-11-25 12:18:43.736291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:42.905 [2024-11-25 12:18:43.736299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.031 ms 00:20:42.905 [2024-11-25 12:18:43.736305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.905 [2024-11-25 12:18:43.755071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.755108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:42.906 [2024-11-25 12:18:43.755117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.700 ms 00:20:42.906 [2024-11-25 12:18:43.755123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.763989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.764017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:42.906 [2024-11-25 12:18:43.764025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.817 ms 00:20:42.906 [2024-11-25 12:18:43.764032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.772896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.772921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:42.906 [2024-11-25 12:18:43.772929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.820 ms 00:20:42.906 [2024-11-25 12:18:43.772935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.773428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.773450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.906 [2024-11-25 12:18:43.773457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:20:42.906 [2024-11-25 12:18:43.773464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.817677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.817719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:42.906 [2024-11-25 12:18:43.817729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 44.193 ms 00:20:42.906 [2024-11-25 12:18:43.817735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.825838] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:42.906 [2024-11-25 12:18:43.837820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.837852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.906 [2024-11-25 12:18:43.837862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.000 ms 00:20:42.906 [2024-11-25 12:18:43.837870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.837981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.837990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:42.906 [2024-11-25 12:18:43.837997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:42.906 [2024-11-25 12:18:43.838003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.838042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.838049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.906 [2024-11-25 12:18:43.838056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:42.906 [2024-11-25 12:18:43.838062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.838080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.838088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:42.906 [2024-11-25 12:18:43.838094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.906 [2024-11-25 12:18:43.838100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.838125] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:42.906 [2024-11-25 12:18:43.838133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.838139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:42.906 [2024-11-25 12:18:43.838146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:42.906 [2024-11-25 12:18:43.838152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.857040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.857071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.906 [2024-11-25 12:18:43.857080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.874 ms 00:20:42.906 [2024-11-25 12:18:43.857086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.906 [2024-11-25 12:18:43.857160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.906 [2024-11-25 12:18:43.857169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.906 [2024-11-25 12:18:43.857175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:42.906 [2024-11-25 12:18:43.857182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:42.906 [2024-11-25 12:18:43.857832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.906 [2024-11-25 12:18:43.860207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.411 ms, result 0 00:20:42.906 [2024-11-25 12:18:43.861981] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.906 [2024-11-25 12:18:43.883204] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.906  [2024-11-25T12:18:43.986Z] Copying: 4096/4096 [kB] (average 42 MBps)[2024-11-25 12:18:43.980687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:43.166 [2024-11-25 12:18:43.990013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:43.990067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:43.166 [2024-11-25 12:18:43.990081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:43.166 [2024-11-25 12:18:43.990095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:43.990116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:43.166 [2024-11-25 12:18:43.992760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:43.992793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:43.166 [2024-11-25 12:18:43.992803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:20:43.166 [2024-11-25 12:18:43.992810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:43.994583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:43.994618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:43.166 [2024-11-25 12:18:43.994628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:20:43.166 [2024-11-25 12:18:43.994635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:43.998617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:43.998652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:43.166 [2024-11-25 12:18:43.998661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:20:43.166 [2024-11-25 12:18:43.998668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:44.005698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:44.005731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:43.166 [2024-11-25 12:18:44.005741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms 00:20:43.166 [2024-11-25 12:18:44.005750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:44.028714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:44.028747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:43.166 [2024-11-25 12:18:44.028757] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.910 ms 00:20:43.166 [2024-11-25 12:18:44.028765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:44.042964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:44.043008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:43.166 [2024-11-25 12:18:44.043021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.163 ms 00:20:43.166 [2024-11-25 12:18:44.043029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:44.043166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:44.043176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:43.166 [2024-11-25 12:18:44.043184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:43.166 [2024-11-25 12:18:44.043192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:44.066015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:44.066048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:43.166 [2024-11-25 12:18:44.066058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.800 ms 00:20:43.166 [2024-11-25 12:18:44.066066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:44.088414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:44.088446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:43.166 [2024-11-25 12:18:44.088456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.314 ms 00:20:43.166 [2024-11-25 12:18:44.088463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:44.110376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:44.110408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:43.166 [2024-11-25 12:18:44.110419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.878 ms 00:20:43.166 [2024-11-25 12:18:44.110426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.166 [2024-11-25 12:18:44.132571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.166 [2024-11-25 12:18:44.132604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:43.166 [2024-11-25 12:18:44.132614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.084 ms 00:20:43.167 [2024-11-25 12:18:44.132621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.167 [2024-11-25 12:18:44.132654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:43.167 [2024-11-25 12:18:44.132669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:43.167 [2024-11-25 12:18:44.132702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.132998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133262] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:43.167 [2024-11-25 12:18:44.133328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:43.168 [2024-11-25 12:18:44.133449] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:43.168 [2024-11-25 12:18:44.133457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c35f6ec4-de26-452c-bcbe-87dd6023e02d 00:20:43.168 [2024-11-25 12:18:44.133465] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:43.168 [2024-11-25 12:18:44.133472] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:43.168 [2024-11-25 12:18:44.133480] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:43.168 [2024-11-25 12:18:44.133488] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:43.168 [2024-11-25 12:18:44.133494] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:43.168 [2024-11-25 12:18:44.133502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:43.168 [2024-11-25 12:18:44.133509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:43.168 [2024-11-25 12:18:44.133515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:43.168 [2024-11-25 12:18:44.133522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:43.168 [2024-11-25 12:18:44.133529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.168 [2024-11-25 12:18:44.133538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:43.168 [2024-11-25 12:18:44.133547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:20:43.168 [2024-11-25 12:18:44.133554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.168 [2024-11-25 12:18:44.145715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.168 [2024-11-25 12:18:44.145746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:43.168 [2024-11-25 12:18:44.145758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.145 ms 00:20:43.168 [2024-11-25 12:18:44.145767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.168 [2024-11-25 12:18:44.146147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.168 [2024-11-25 12:18:44.146164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:43.168 [2024-11-25 12:18:44.146173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:20:43.168 [2024-11-25 12:18:44.146181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.168 [2024-11-25 12:18:44.180675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.168 [2024-11-25 12:18:44.180718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.168 [2024-11-25 12:18:44.180729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.168 [2024-11-25 12:18:44.180736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.168 [2024-11-25 12:18:44.180817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.168 [2024-11-25 12:18:44.180825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.168 [2024-11-25 12:18:44.180833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.168 [2024-11-25 12:18:44.180839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.168 [2024-11-25 12:18:44.180882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.168 [2024-11-25 12:18:44.180890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.168 [2024-11-25 12:18:44.180898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.168 [2024-11-25 12:18:44.180904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.168 [2024-11-25 12:18:44.180921] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.168 [2024-11-25 12:18:44.180932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.168 [2024-11-25 12:18:44.180939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.168 [2024-11-25 12:18:44.180959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.258336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.427 [2024-11-25 12:18:44.258386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.427 [2024-11-25 12:18:44.258397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.427 [2024-11-25 12:18:44.258405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.321545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.427 [2024-11-25 12:18:44.321593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.427 [2024-11-25 12:18:44.321604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.427 [2024-11-25 12:18:44.321611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.321665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.427 [2024-11-25 12:18:44.321674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.427 [2024-11-25 12:18:44.321682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.427 [2024-11-25 12:18:44.321689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.321717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.427 [2024-11-25 12:18:44.321725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.427 [2024-11-25 12:18:44.321737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.427 [2024-11-25 12:18:44.321744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.321833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.427 [2024-11-25 12:18:44.321843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.427 [2024-11-25 12:18:44.321851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.427 [2024-11-25 12:18:44.321859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.321888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.427 [2024-11-25 12:18:44.321896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:43.427 [2024-11-25 12:18:44.321903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.427 [2024-11-25 12:18:44.321913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.321965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.427 [2024-11-25 12:18:44.321974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.427 [2024-11-25 12:18:44.321982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.427 [2024-11-25 12:18:44.321989] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.322027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.427 [2024-11-25 12:18:44.322036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.427 [2024-11-25 12:18:44.322046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.427 [2024-11-25 12:18:44.322054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.427 [2024-11-25 12:18:44.322179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.160 ms, result 0 00:20:43.995 00:20:43.995 00:20:43.995 12:18:45 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:43.995 12:18:45 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76916 00:20:43.995 12:18:45 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76916 00:20:43.995 12:18:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76916 ']' 00:20:43.995 12:18:45 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.995 12:18:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.995 12:18:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.995 12:18:45 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.995 12:18:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:44.254 [2024-11-25 12:18:45.096164] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
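For context on the trim.sh@92-@94 trace just above: the test restarts the SPDK target with the ftl_init log flag enabled (spdk_tgt -L ftl_init), records its pid in svcpid, and waitforlisten blocks until the new process is up and listening on the UNIX domain socket /var/tmp/spdk.sock, with max_retries=100. A minimal sketch of that wait loop, assuming only what the trace shows (the real helper in common/autotest_common.sh is more thorough about verifying the listener; this hypothetical version only polls for the socket and for the process staying alive):

# simplified re-implementation for illustration, not the harness's own code
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
        [ -S "$rpc_addr" ] && return 0          # RPC socket has appeared
        sleep 0.1
    done
    return 1                                    # timed out
}
# usage: build/bin/spdk_tgt -L ftl_init & waitforlisten_sketch $!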
00:20:44.254 [2024-11-25 12:18:45.096284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76916 ] 00:20:44.254 [2024-11-25 12:18:45.248148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.512 [2024-11-25 12:18:45.347673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.093 12:18:45 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.093 12:18:45 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:45.093 12:18:45 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:45.093 [2024-11-25 12:18:46.142231] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.093 [2024-11-25 12:18:46.142301] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.353 [2024-11-25 12:18:46.312910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.312979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:45.353 [2024-11-25 12:18:46.312993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:45.353 [2024-11-25 12:18:46.313000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.315214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.315249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.353 [2024-11-25 12:18:46.315259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:20:45.353 [2024-11-25 12:18:46.315264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.315338] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:45.353 [2024-11-25 12:18:46.315927] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:45.353 [2024-11-25 12:18:46.315958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.315965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.353 [2024-11-25 12:18:46.315974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:20:45.353 [2024-11-25 12:18:46.315980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.317175] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:45.353 [2024-11-25 12:18:46.327507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.327563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:45.353 [2024-11-25 12:18:46.327575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.333 ms 00:20:45.353 [2024-11-25 12:18:46.327583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.327698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.327710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:45.353 [2024-11-25 12:18:46.327717] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:45.353 [2024-11-25 12:18:46.327724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.332991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.333035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.353 [2024-11-25 12:18:46.333044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.221 ms 00:20:45.353 [2024-11-25 12:18:46.333052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.333156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.333166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.353 [2024-11-25 12:18:46.333173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:45.353 [2024-11-25 12:18:46.333181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.333206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.333214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:45.353 [2024-11-25 12:18:46.333221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:45.353 [2024-11-25 12:18:46.333228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.333249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:45.353 [2024-11-25 12:18:46.336149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.336178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.353 [2024-11-25 12:18:46.336188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:20:45.353 [2024-11-25 12:18:46.336195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.336231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.353 [2024-11-25 12:18:46.336238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:45.353 [2024-11-25 12:18:46.336247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:45.353 [2024-11-25 12:18:46.336255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.353 [2024-11-25 12:18:46.336273] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:45.353 [2024-11-25 12:18:46.336287] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:45.353 [2024-11-25 12:18:46.336322] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:45.353 [2024-11-25 12:18:46.336335] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:45.353 [2024-11-25 12:18:46.336420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:45.353 [2024-11-25 12:18:46.336429] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:45.353 [2024-11-25 12:18:46.336441] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:45.353 [2024-11-25 12:18:46.336450] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:45.353 [2024-11-25 12:18:46.336458] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:45.353 [2024-11-25 12:18:46.336465] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:45.353 [2024-11-25 12:18:46.336472] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:45.354 [2024-11-25 12:18:46.336477] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:45.354 [2024-11-25 12:18:46.336486] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:45.354 [2024-11-25 12:18:46.336492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.354 [2024-11-25 12:18:46.336500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:45.354 [2024-11-25 12:18:46.336506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:20:45.354 [2024-11-25 12:18:46.336513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.354 [2024-11-25 12:18:46.336585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.354 [2024-11-25 12:18:46.336593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:45.354 [2024-11-25 12:18:46.336599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:45.354 [2024-11-25 12:18:46.336605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.354 [2024-11-25 12:18:46.336687] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:45.354 [2024-11-25 12:18:46.336744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:45.354 [2024-11-25 12:18:46.336751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.354 [2024-11-25 12:18:46.336758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:45.354 [2024-11-25 12:18:46.336771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:45.354 [2024-11-25 12:18:46.336788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:45.354 [2024-11-25 12:18:46.336793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.354 [2024-11-25 12:18:46.336806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:45.354 [2024-11-25 12:18:46.336812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:45.354 [2024-11-25 12:18:46.336817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.354 [2024-11-25 12:18:46.336824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:45.354 [2024-11-25 12:18:46.336830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:45.354 [2024-11-25 12:18:46.336836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.354 
[2024-11-25 12:18:46.336841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:45.354 [2024-11-25 12:18:46.336848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:45.354 [2024-11-25 12:18:46.336853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:45.354 [2024-11-25 12:18:46.336870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.354 [2024-11-25 12:18:46.336882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:45.354 [2024-11-25 12:18:46.336889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.354 [2024-11-25 12:18:46.336901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:45.354 [2024-11-25 12:18:46.336907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.354 [2024-11-25 12:18:46.336921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:45.354 [2024-11-25 12:18:46.336928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.354 [2024-11-25 12:18:46.336940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:45.354 [2024-11-25 12:18:46.336955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:45.354 [2024-11-25 12:18:46.336975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.354 [2024-11-25 12:18:46.336980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:45.354 [2024-11-25 12:18:46.336987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:45.354 [2024-11-25 12:18:46.336992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.354 [2024-11-25 12:18:46.336999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:45.354 [2024-11-25 12:18:46.337004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:45.354 [2024-11-25 12:18:46.337013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.354 [2024-11-25 12:18:46.337018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:45.354 [2024-11-25 12:18:46.337025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:45.354 [2024-11-25 12:18:46.337030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.354 [2024-11-25 12:18:46.337037] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:45.354 [2024-11-25 12:18:46.337043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:45.354 [2024-11-25 12:18:46.337051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.354 [2024-11-25 12:18:46.337057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.354 [2024-11-25 12:18:46.337064] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:45.354 [2024-11-25 12:18:46.337069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:45.354 [2024-11-25 12:18:46.337076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:45.354 [2024-11-25 12:18:46.337081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:45.354 [2024-11-25 12:18:46.337088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:45.354 [2024-11-25 12:18:46.337093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:45.354 [2024-11-25 12:18:46.337101] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:45.354 [2024-11-25 12:18:46.337109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.354 [2024-11-25 12:18:46.337118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:45.354 [2024-11-25 12:18:46.337123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:45.354 [2024-11-25 12:18:46.337132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:45.354 [2024-11-25 12:18:46.337137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:45.354 [2024-11-25 12:18:46.337144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:45.354 [2024-11-25 12:18:46.337151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:45.354 [2024-11-25 12:18:46.337158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:45.354 [2024-11-25 12:18:46.337164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:45.354 [2024-11-25 12:18:46.337171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:45.354 [2024-11-25 12:18:46.337177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:45.354 [2024-11-25 12:18:46.337184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:45.354 [2024-11-25 12:18:46.337190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:45.354 [2024-11-25 12:18:46.337197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:45.354 [2024-11-25 12:18:46.337203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:45.354 [2024-11-25 12:18:46.337210] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:45.354 [2024-11-25 
12:18:46.337217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.354 [2024-11-25 12:18:46.337226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:45.354 [2024-11-25 12:18:46.337232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:45.354 [2024-11-25 12:18:46.337239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:45.354 [2024-11-25 12:18:46.337245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:45.354 [2024-11-25 12:18:46.337252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.354 [2024-11-25 12:18:46.337258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:45.354 [2024-11-25 12:18:46.337265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:20:45.354 [2024-11-25 12:18:46.337271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.354 [2024-11-25 12:18:46.359001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.354 [2024-11-25 12:18:46.359043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.354 [2024-11-25 12:18:46.359054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.663 ms 00:20:45.354 [2024-11-25 12:18:46.359061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.354 [2024-11-25 12:18:46.359186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.354 [2024-11-25 12:18:46.359193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:45.354 [2024-11-25 12:18:46.359201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:45.355 [2024-11-25 12:18:46.359207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.355 [2024-11-25 12:18:46.383883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.355 [2024-11-25 12:18:46.383929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.355 [2024-11-25 12:18:46.383944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.655 ms 00:20:45.355 [2024-11-25 12:18:46.383957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.355 [2024-11-25 12:18:46.384031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.355 [2024-11-25 12:18:46.384039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.355 [2024-11-25 12:18:46.384047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:45.355 [2024-11-25 12:18:46.384053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.355 [2024-11-25 12:18:46.384359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.355 [2024-11-25 12:18:46.384382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.355 [2024-11-25 12:18:46.384391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:45.355 [2024-11-25 12:18:46.384399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:45.355 [2024-11-25 12:18:46.384505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.355 [2024-11-25 12:18:46.384516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.355 [2024-11-25 12:18:46.384523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:45.355 [2024-11-25 12:18:46.384529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.355 [2024-11-25 12:18:46.396537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.355 [2024-11-25 12:18:46.396576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.355 [2024-11-25 12:18:46.396586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.987 ms 00:20:45.355 [2024-11-25 12:18:46.396593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.355 [2024-11-25 12:18:46.406539] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:45.355 [2024-11-25 12:18:46.406581] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:45.355 [2024-11-25 12:18:46.406593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.355 [2024-11-25 12:18:46.406601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:45.355 [2024-11-25 12:18:46.406609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.897 ms 00:20:45.355 [2024-11-25 12:18:46.406616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.355 [2024-11-25 12:18:46.426051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.355 [2024-11-25 12:18:46.426106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:45.355 [2024-11-25 12:18:46.426118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.331 ms 00:20:45.355 [2024-11-25 12:18:46.426124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.436110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.436158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:45.614 [2024-11-25 12:18:46.436171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.900 ms 00:20:45.614 [2024-11-25 12:18:46.436177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.445351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.445396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:45.614 [2024-11-25 12:18:46.445408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.084 ms 00:20:45.614 [2024-11-25 12:18:46.445414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.445934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.445963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:45.614 [2024-11-25 12:18:46.445973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:20:45.614 [2024-11-25 12:18:46.445979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 
12:18:46.512339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.512383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:45.614 [2024-11-25 12:18:46.512398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.336 ms 00:20:45.614 [2024-11-25 12:18:46.512405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.520904] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:45.614 [2024-11-25 12:18:46.533027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.533068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:45.614 [2024-11-25 12:18:46.533081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.522 ms 00:20:45.614 [2024-11-25 12:18:46.533090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.533167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.533176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:45.614 [2024-11-25 12:18:46.533183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:45.614 [2024-11-25 12:18:46.533190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.533230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.533239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:45.614 [2024-11-25 12:18:46.533245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:45.614 [2024-11-25 12:18:46.533252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.533272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.533280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:45.614 [2024-11-25 12:18:46.533286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:45.614 [2024-11-25 12:18:46.533296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.533320] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:45.614 [2024-11-25 12:18:46.533330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.533336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:45.614 [2024-11-25 12:18:46.533345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:45.614 [2024-11-25 12:18:46.533351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.551525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.551560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:45.614 [2024-11-25 12:18:46.551573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.153 ms 00:20:45.614 [2024-11-25 12:18:46.551580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.551663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.614 [2024-11-25 12:18:46.551671] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:45.614 [2024-11-25 12:18:46.551680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:45.614 [2024-11-25 12:18:46.551688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.614 [2024-11-25 12:18:46.552680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:45.614 [2024-11-25 12:18:46.555363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 239.540 ms, result 0 00:20:45.614 [2024-11-25 12:18:46.555956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:45.614 Some configs were skipped because the RPC state that can call them passed over. 00:20:45.614 12:18:46 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:45.872 [2024-11-25 12:18:46.780087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.872 [2024-11-25 12:18:46.780139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:45.872 [2024-11-25 12:18:46.780149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:20:45.872 [2024-11-25 12:18:46.780157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.872 [2024-11-25 12:18:46.780185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.305 ms, result 0 00:20:45.872 true 00:20:45.872 12:18:46 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:46.130 [2024-11-25 12:18:47.007826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.130 [2024-11-25 12:18:47.007876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:46.130 [2024-11-25 12:18:47.007888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:20:46.130 [2024-11-25 12:18:47.007894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.130 [2024-11-25 12:18:47.007924] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 0.791 ms, result 0 00:20:46.130 true 00:20:46.130 12:18:47 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76916 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76916 ']' 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76916 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76916 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.130 killing process with pid 76916 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76916' 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76916 00:20:46.130 12:18:47 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76916 00:20:46.696 [2024-11-25 12:18:47.610252] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.696 [2024-11-25 12:18:47.610312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:46.696 [2024-11-25 12:18:47.610322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:46.696 [2024-11-25 12:18:47.610331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.696 [2024-11-25 12:18:47.610348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:46.696 [2024-11-25 12:18:47.612483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.696 [2024-11-25 12:18:47.612512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:46.696 [2024-11-25 12:18:47.612525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:20:46.697 [2024-11-25 12:18:47.612533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.612761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.612776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:46.697 [2024-11-25 12:18:47.612785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:20:46.697 [2024-11-25 12:18:47.612792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.616151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.616178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:46.697 [2024-11-25 12:18:47.616189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:20:46.697 [2024-11-25 12:18:47.616195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.621669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.621702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:46.697 [2024-11-25 12:18:47.621712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.441 ms 00:20:46.697 [2024-11-25 12:18:47.621718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.629187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.629225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:46.697 [2024-11-25 12:18:47.629236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.405 ms 00:20:46.697 [2024-11-25 12:18:47.629249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.634876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.634915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:46.697 [2024-11-25 12:18:47.634928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.587 ms 00:20:46.697 [2024-11-25 12:18:47.634935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.635037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.635045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:46.697 [2024-11-25 12:18:47.635053] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:46.697 [2024-11-25 12:18:47.635059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.642944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.642984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:46.697 [2024-11-25 12:18:47.642994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.865 ms 00:20:46.697 [2024-11-25 12:18:47.643000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.650113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.650141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:46.697 [2024-11-25 12:18:47.650152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.079 ms 00:20:46.697 [2024-11-25 12:18:47.650158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.656931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.656968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:46.697 [2024-11-25 12:18:47.656981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.732 ms 00:20:46.697 [2024-11-25 12:18:47.656988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.663985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.697 [2024-11-25 12:18:47.664011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:46.697 [2024-11-25 12:18:47.664020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.945 ms 00:20:46.697 [2024-11-25 12:18:47.664026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.697 [2024-11-25 12:18:47.664054] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:46.697 [2024-11-25 12:18:47.664066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:46.697 [2024-11-25 12:18:47.664137] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 11-100: 0 / 261120 wr_cnt: 0 state: free (all 90 remaining per-band entries identical to Bands 1-10 above) 00:20:46.698 [2024-11-25 12:18:47.664752] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:46.698 [2024-11-25 12:18:47.664764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c35f6ec4-de26-452c-bcbe-87dd6023e02d 00:20:46.698 [2024-11-25 12:18:47.664776] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:46.698 [2024-11-25 12:18:47.664786] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:46.698 [2024-11-25 12:18:47.664792] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:46.698 [2024-11-25 12:18:47.664799] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:46.698 [2024-11-25 12:18:47.664805] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:46.698 [2024-11-25 12:18:47.664813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:46.698 [2024-11-25 12:18:47.664819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:46.698 [2024-11-25 12:18:47.664825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:46.698 [2024-11-25 12:18:47.664830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:46.698 [2024-11-25 12:18:47.664838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
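The trim exercised above goes through SPDK's JSON-RPC command-line client: trim.sh@99 unmaps 1024 blocks at LBA 0 and trim.sh@100 unmaps 1024 blocks at LBA 23591936, i.e. the first and the last 1024 blocks of the device, with each call printing "true" on success. A minimal sketch of the same two calls follows; only the rpc.py path, the bdev_ftl_unmap subcommand, and its flags come from the trace, while the variable names and the device-size constant (matched to the "L2P entries" figure reported later in this log) are illustrative assumptions:

  # Trim the first and the last 1024 blocks of the FTL bdev, as trim.sh does above.
  # rpc.py sends bdev_ftl_unmap to the running SPDK app over its JSON-RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # client path seen in the trace
  total_blocks=23592960                             # assumed: "L2P entries" from the layout dump below
  chunk=1024
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$chunk"
  "$rpc" bdev_ftl_unmap -b ftl0 --lba $(( total_blocks - chunk )) --num_blocks "$chunk"
  # 23592960 - 1024 = 23591936, the LBA of the second unmap above

Each unmap runs as its own short management process, which is why the trace shows a separate "Process trim" action (~1 ms each) and a "Management process finished, name 'FTL trim'" line per call.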
00:20:46.698 [2024-11-25 12:18:47.664844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:46.698 [2024-11-25 12:18:47.664852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:20:46.698 [2024-11-25 12:18:47.664858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.698 [2024-11-25 12:18:47.674559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.698 [2024-11-25 12:18:47.674586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:46.698 [2024-11-25 12:18:47.674597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.682 ms 00:20:46.698 [2024-11-25 12:18:47.674603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.698 [2024-11-25 12:18:47.674893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.698 [2024-11-25 12:18:47.674906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:46.698 [2024-11-25 12:18:47.674915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:46.698 [2024-11-25 12:18:47.674923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.698 [2024-11-25 12:18:47.710079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.698 [2024-11-25 12:18:47.710125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.698 [2024-11-25 12:18:47.710136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.698 [2024-11-25 12:18:47.710142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.698 [2024-11-25 12:18:47.710248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.698 [2024-11-25 12:18:47.710256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.698 [2024-11-25 12:18:47.710264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.698 [2024-11-25 12:18:47.710272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.698 [2024-11-25 12:18:47.710316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.698 [2024-11-25 12:18:47.710325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.698 [2024-11-25 12:18:47.710335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.698 [2024-11-25 12:18:47.710341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.698 [2024-11-25 12:18:47.710357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.698 [2024-11-25 12:18:47.710363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.698 [2024-11-25 12:18:47.710371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.698 [2024-11-25 12:18:47.710377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.698 [2024-11-25 12:18:47.771292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.698 [2024-11-25 12:18:47.771338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.698 [2024-11-25 12:18:47.771348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.698 [2024-11-25 12:18:47.771355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.956 [2024-11-25 
12:18:47.820547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.956 [2024-11-25 12:18:47.820594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.956 [2024-11-25 12:18:47.820606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.956 [2024-11-25 12:18:47.820614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.956 [2024-11-25 12:18:47.820688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.956 [2024-11-25 12:18:47.820695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.956 [2024-11-25 12:18:47.820705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.956 [2024-11-25 12:18:47.820711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.956 [2024-11-25 12:18:47.820737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.957 [2024-11-25 12:18:47.820744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.957 [2024-11-25 12:18:47.820752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.957 [2024-11-25 12:18:47.820758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.957 [2024-11-25 12:18:47.820836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.957 [2024-11-25 12:18:47.820843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.957 [2024-11-25 12:18:47.820851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.957 [2024-11-25 12:18:47.820856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.957 [2024-11-25 12:18:47.820883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.957 [2024-11-25 12:18:47.820890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:46.957 [2024-11-25 12:18:47.820897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.957 [2024-11-25 12:18:47.820903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.957 [2024-11-25 12:18:47.820935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.957 [2024-11-25 12:18:47.820944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.957 [2024-11-25 12:18:47.820964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.957 [2024-11-25 12:18:47.820971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.957 [2024-11-25 12:18:47.821007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.957 [2024-11-25 12:18:47.821037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.957 [2024-11-25 12:18:47.821045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.957 [2024-11-25 12:18:47.821051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.957 [2024-11-25 12:18:47.821159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.890 ms, result 0 00:20:47.523 12:18:48 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:47.523 [2024-11-25 12:18:48.407119] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:20:47.523 [2024-11-25 12:18:48.407244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76963 ] 00:20:47.523 [2024-11-25 12:18:48.559244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.782 [2024-11-25 12:18:48.643317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.782 [2024-11-25 12:18:48.857563] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.782 [2024-11-25 12:18:48.857619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:48.080 [2024-11-25 12:18:49.008608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.008660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:48.080 [2024-11-25 12:18:49.008673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.080 [2024-11-25 12:18:49.008681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.011370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.011402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.080 [2024-11-25 12:18:49.011412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.670 ms 00:20:48.080 [2024-11-25 12:18:49.011420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.011549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:48.080 [2024-11-25 12:18:49.012302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:48.080 [2024-11-25 12:18:49.012327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.012334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.080 [2024-11-25 12:18:49.012343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:20:48.080 [2024-11-25 12:18:49.012351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.013802] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:48.080 [2024-11-25 12:18:49.025782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.025822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:48.080 [2024-11-25 12:18:49.025837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.981 ms 00:20:48.080 [2024-11-25 12:18:49.025845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.025962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.025974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:48.080 [2024-11-25 12:18:49.025983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:48.080 [2024-11-25 
12:18:49.025990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.031128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.031159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.080 [2024-11-25 12:18:49.031168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.094 ms 00:20:48.080 [2024-11-25 12:18:49.031176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.031271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.031280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.080 [2024-11-25 12:18:49.031288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:48.080 [2024-11-25 12:18:49.031295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.031322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.031333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:48.080 [2024-11-25 12:18:49.031340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:48.080 [2024-11-25 12:18:49.031348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.031369] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:48.080 [2024-11-25 12:18:49.034685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.034712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.080 [2024-11-25 12:18:49.034720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:20:48.080 [2024-11-25 12:18:49.034727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.034763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.080 [2024-11-25 12:18:49.034772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:48.080 [2024-11-25 12:18:49.034780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:48.080 [2024-11-25 12:18:49.034786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.080 [2024-11-25 12:18:49.034804] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:48.080 [2024-11-25 12:18:49.034822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:48.080 [2024-11-25 12:18:49.034856] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:48.080 [2024-11-25 12:18:49.034870] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:48.080 [2024-11-25 12:18:49.034992] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:48.080 [2024-11-25 12:18:49.035004] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:48.080 [2024-11-25 12:18:49.035014] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
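As a sanity check on the layout dump that follows: ftl0 reports 23592960 L2P entries with a 4-byte address size, which works out to exactly the 90.00 MiB l2p region printed below (23592960 × 4 B = 94371840 B = 90 MiB). A one-liner reproducing the arithmetic (variable names are illustrative only, not trim.sh variables):

  l2p_entries=23592960   # "L2P entries: 23592960" in the dump below
  addr_size=4            # "L2P address size: 4", i.e. 4 bytes per entry
  echo "l2p region: $(( l2p_entries * addr_size / 1024 / 1024 )) MiB"   # prints "l2p region: 90 MiB"

The same entry count is also where the trim test's second target comes from: LBA 23591936 is exactly 23592960 − 1024, the last 1024-block chunk of the device.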
00:20:48.080 [2024-11-25 12:18:49.035024] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:48.080 [2024-11-25 12:18:49.035035] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035042] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:48.081 [2024-11-25 12:18:49.035049] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:48.081 [2024-11-25 12:18:49.035056] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:48.081 [2024-11-25 12:18:49.035064] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:48.081 [2024-11-25 12:18:49.035071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.081 [2024-11-25 12:18:49.035078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:48.081 [2024-11-25 12:18:49.035085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:48.081 [2024-11-25 12:18:49.035092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.081 [2024-11-25 12:18:49.035179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.081 [2024-11-25 12:18:49.035186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:48.081 [2024-11-25 12:18:49.035195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:48.081 [2024-11-25 12:18:49.035202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.081 [2024-11-25 12:18:49.035315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:48.081 [2024-11-25 12:18:49.035331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:48.081 [2024-11-25 12:18:49.035340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:48.081 [2024-11-25 12:18:49.035362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:48.081 [2024-11-25 12:18:49.035384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:48.081 [2024-11-25 12:18:49.035397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:48.081 [2024-11-25 12:18:49.035403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:48.081 [2024-11-25 12:18:49.035410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:48.081 [2024-11-25 12:18:49.035423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:48.081 [2024-11-25 12:18:49.035429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:48.081 [2024-11-25 12:18:49.035435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:48.081 [2024-11-25 12:18:49.035448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:48.081 [2024-11-25 12:18:49.035467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:48.081 [2024-11-25 12:18:49.035487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:48.081 [2024-11-25 12:18:49.035505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:48.081 [2024-11-25 12:18:49.035524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:48.081 [2024-11-25 12:18:49.035542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:48.081 [2024-11-25 12:18:49.035554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:48.081 [2024-11-25 12:18:49.035561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:48.081 [2024-11-25 12:18:49.035567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:48.081 [2024-11-25 12:18:49.035574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:48.081 [2024-11-25 12:18:49.035581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:48.081 [2024-11-25 12:18:49.035588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:48.081 [2024-11-25 12:18:49.035600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:48.081 [2024-11-25 12:18:49.035606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035613] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:48.081 [2024-11-25 12:18:49.035621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:48.081 [2024-11-25 12:18:49.035627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.081 [2024-11-25 12:18:49.035643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:48.081 [2024-11-25 12:18:49.035651] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:48.081 [2024-11-25 12:18:49.035657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:48.081 [2024-11-25 12:18:49.035663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:48.081 [2024-11-25 12:18:49.035669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:48.081 [2024-11-25 12:18:49.035676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:48.081 [2024-11-25 12:18:49.035684] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:48.081 [2024-11-25 12:18:49.035693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.081 [2024-11-25 12:18:49.035701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:48.081 [2024-11-25 12:18:49.035708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:48.081 [2024-11-25 12:18:49.035715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:48.081 [2024-11-25 12:18:49.035722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:48.081 [2024-11-25 12:18:49.035729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:48.081 [2024-11-25 12:18:49.035736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:48.081 [2024-11-25 12:18:49.035742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:48.081 [2024-11-25 12:18:49.035749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:48.081 [2024-11-25 12:18:49.035756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:48.081 [2024-11-25 12:18:49.035763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:48.081 [2024-11-25 12:18:49.035770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:48.081 [2024-11-25 12:18:49.035777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:48.081 [2024-11-25 12:18:49.035784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:48.081 [2024-11-25 12:18:49.035791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:48.081 [2024-11-25 12:18:49.035798] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:48.081 [2024-11-25 12:18:49.035806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.081 [2024-11-25 12:18:49.035813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:48.081 [2024-11-25 12:18:49.035820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:48.081 [2024-11-25 12:18:49.035827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:48.081 [2024-11-25 12:18:49.035834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:48.081 [2024-11-25 12:18:49.035841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.081 [2024-11-25 12:18:49.035849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:48.081 [2024-11-25 12:18:49.035858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:20:48.081 [2024-11-25 12:18:49.035865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.081 [2024-11-25 12:18:49.061507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.081 [2024-11-25 12:18:49.061548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.081 [2024-11-25 12:18:49.061558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:20:48.081 [2024-11-25 12:18:49.061565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.081 [2024-11-25 12:18:49.061707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.081 [2024-11-25 12:18:49.061720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:48.081 [2024-11-25 12:18:49.061729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:48.081 [2024-11-25 12:18:49.061736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.081 [2024-11-25 12:18:49.107623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.082 [2024-11-25 12:18:49.107682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.082 [2024-11-25 12:18:49.107695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.865 ms 00:20:48.082 [2024-11-25 12:18:49.107706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.082 [2024-11-25 12:18:49.107829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.082 [2024-11-25 12:18:49.107840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.082 [2024-11-25 12:18:49.107849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.082 [2024-11-25 12:18:49.107857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.082 [2024-11-25 12:18:49.108192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.082 [2024-11-25 12:18:49.108213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.082 [2024-11-25 12:18:49.108223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:48.082 [2024-11-25 12:18:49.108234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.082 [2024-11-25 12:18:49.108362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:48.082 [2024-11-25 12:18:49.108371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.082 [2024-11-25 12:18:49.108379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:48.082 [2024-11-25 12:18:49.108387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.082 [2024-11-25 12:18:49.121654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.082 [2024-11-25 12:18:49.121689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.082 [2024-11-25 12:18:49.121700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.247 ms 00:20:48.082 [2024-11-25 12:18:49.121708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.082 [2024-11-25 12:18:49.133869] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:48.082 [2024-11-25 12:18:49.133910] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:48.082 [2024-11-25 12:18:49.133923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.082 [2024-11-25 12:18:49.133931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:48.082 [2024-11-25 12:18:49.133941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.093 ms 00:20:48.082 [2024-11-25 12:18:49.133958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.339 [2024-11-25 12:18:49.158096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.339 [2024-11-25 12:18:49.158158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:48.339 [2024-11-25 12:18:49.158172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.042 ms 00:20:48.340 [2024-11-25 12:18:49.158181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.169882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.169916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:48.340 [2024-11-25 12:18:49.169927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.590 ms 00:20:48.340 [2024-11-25 12:18:49.169935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.181288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.181321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:48.340 [2024-11-25 12:18:49.181332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.266 ms 00:20:48.340 [2024-11-25 12:18:49.181339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.182006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.182027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:48.340 [2024-11-25 12:18:49.182037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:20:48.340 [2024-11-25 12:18:49.182044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.241846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 
12:18:49.241916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:48.340 [2024-11-25 12:18:49.241936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.776 ms 00:20:48.340 [2024-11-25 12:18:49.241959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.252927] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:48.340 [2024-11-25 12:18:49.268219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.268261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:48.340 [2024-11-25 12:18:49.268273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.091 ms 00:20:48.340 [2024-11-25 12:18:49.268286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.268384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.268395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:48.340 [2024-11-25 12:18:49.268404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:48.340 [2024-11-25 12:18:49.268412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.268461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.268470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:48.340 [2024-11-25 12:18:49.268478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:48.340 [2024-11-25 12:18:49.268487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.268510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.268517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:48.340 [2024-11-25 12:18:49.268525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.340 [2024-11-25 12:18:49.268532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.268562] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:48.340 [2024-11-25 12:18:49.268572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.268579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:48.340 [2024-11-25 12:18:49.268586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:48.340 [2024-11-25 12:18:49.268593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.291837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.291895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:48.340 [2024-11-25 12:18:49.291908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.224 ms 00:20:48.340 [2024-11-25 12:18:49.291917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.292037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.340 [2024-11-25 12:18:49.292048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:48.340 [2024-11-25 
12:18:49.292057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:48.340 [2024-11-25 12:18:49.292064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.340 [2024-11-25 12:18:49.292931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:48.340 [2024-11-25 12:18:49.296224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.043 ms, result 0 00:20:48.340 [2024-11-25 12:18:49.297049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:48.340 [2024-11-25 12:18:49.310097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:49.713  [2024-11-25T12:18:51.727Z] Copying: 48/256 [MB] (48 MBps) [2024-11-25T12:18:52.661Z] Copying: 92/256 [MB] (44 MBps) [2024-11-25T12:18:53.595Z] Copying: 135/256 [MB] (43 MBps) [2024-11-25T12:18:54.529Z] Copying: 179/256 [MB] (43 MBps) [2024-11-25T12:18:55.464Z] Copying: 221/256 [MB] (42 MBps) [2024-11-25T12:18:55.767Z] Copying: 256/256 [MB] (average 44 MBps)[2024-11-25 12:18:55.573799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.687 [2024-11-25 12:18:55.588921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.588974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.687 [2024-11-25 12:18:55.588987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.687 [2024-11-25 12:18:55.589001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.589025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:54.687 [2024-11-25 12:18:55.591663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.591697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.687 [2024-11-25 12:18:55.591708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms 00:20:54.687 [2024-11-25 12:18:55.591717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.591993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.592013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.687 [2024-11-25 12:18:55.592022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:54.687 [2024-11-25 12:18:55.592030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.595712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.595738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.687 [2024-11-25 12:18:55.595747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:20:54.687 [2024-11-25 12:18:55.595755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.602725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.602758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:54.687 [2024-11-25 12:18:55.602768] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.951 ms 00:20:54.687 [2024-11-25 12:18:55.602777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.627304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.627359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.687 [2024-11-25 12:18:55.627373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.456 ms 00:20:54.687 [2024-11-25 12:18:55.627381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.641556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.641624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.687 [2024-11-25 12:18:55.641648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.138 ms 00:20:54.687 [2024-11-25 12:18:55.641656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.641817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.641828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.687 [2024-11-25 12:18:55.641837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:54.687 [2024-11-25 12:18:55.641844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.667462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.667508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:54.687 [2024-11-25 12:18:55.667520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.593 ms 00:20:54.687 [2024-11-25 12:18:55.667528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.693492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.693554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:54.687 [2024-11-25 12:18:55.693570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.927 ms 00:20:54.687 [2024-11-25 12:18:55.693582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.687 [2024-11-25 12:18:55.716661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.687 [2024-11-25 12:18:55.716715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.687 [2024-11-25 12:18:55.716728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.010 ms 00:20:54.687 [2024-11-25 12:18:55.716736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.970 [2024-11-25 12:18:55.739360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.970 [2024-11-25 12:18:55.739424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.970 [2024-11-25 12:18:55.739436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.560 ms 00:20:54.970 [2024-11-25 12:18:55.739444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.970 [2024-11-25 12:18:55.739474] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.970 [2024-11-25 12:18:55.739489] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:54.970 [2024-11-25 12:18:55.739604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739683] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 
12:18:55.739882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.739995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:20:54.971 [2024-11-25 12:18:55.740089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:20:54.971 [2024-11-25 12:18:55.740295] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:54.971 [2024-11-25 12:18:55.740302] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c35f6ec4-de26-452c-bcbe-87dd6023e02d 00:20:54.971 [2024-11-25 12:18:55.740310] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:54.971 [2024-11-25 12:18:55.740317] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:54.971 [2024-11-25 12:18:55.740325] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:54.972 [2024-11-25 12:18:55.740333] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:54.972 [2024-11-25 12:18:55.740341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:54.972 [2024-11-25 12:18:55.740348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:54.972 [2024-11-25 12:18:55.740359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:54.972 [2024-11-25 12:18:55.740366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:54.972 [2024-11-25 12:18:55.740373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:54.972 [2024-11-25 12:18:55.740380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.972 [2024-11-25 12:18:55.740387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:54.972 [2024-11-25 12:18:55.740395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:20:54.972 [2024-11-25 12:18:55.740402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.753274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.972 [2024-11-25 12:18:55.753317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:54.972 [2024-11-25 12:18:55.753329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.852 ms 00:20:54.972 [2024-11-25 12:18:55.753337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.753788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.972 [2024-11-25 12:18:55.753814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:54.972 [2024-11-25 12:18:55.753823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:20:54.972 [2024-11-25 12:18:55.753830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.790016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.790076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.972 [2024-11-25 12:18:55.790089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.790103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.790222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.790236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.972 [2024-11-25 12:18:55.790245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.790252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:54.972 [2024-11-25 12:18:55.790304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.790314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.972 [2024-11-25 12:18:55.790321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.790328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.790349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.790356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.972 [2024-11-25 12:18:55.790364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.790371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.869036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.869092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.972 [2024-11-25 12:18:55.869103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.869111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.932884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.932937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.972 [2024-11-25 12:18:55.932968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.932977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.933049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.933059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.972 [2024-11-25 12:18:55.933067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.933074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.933102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.933112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.972 [2024-11-25 12:18:55.933120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.933127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.933211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.933220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.972 [2024-11-25 12:18:55.933228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.933235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.933264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.933273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:54.972 [2024-11-25 12:18:55.933283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 
12:18:55.933290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.933324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.933332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.972 [2024-11-25 12:18:55.933340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.933347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.933403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.972 [2024-11-25 12:18:55.933417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.972 [2024-11-25 12:18:55.933425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.972 [2024-11-25 12:18:55.933432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.972 [2024-11-25 12:18:55.933559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.642 ms, result 0 00:20:55.537 00:20:55.537 00:20:55.794 12:18:56 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:56.361 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:56.361 12:18:57 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:56.361 12:18:57 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:56.361 12:18:57 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:56.361 12:18:57 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:56.361 12:18:57 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:56.361 12:18:57 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:56.361 Process with pid 76916 is not found 00:20:56.361 12:18:57 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76916 00:20:56.361 12:18:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76916 ']' 00:20:56.361 12:18:57 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76916 00:20:56.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76916) - No such process 00:20:56.361 12:18:57 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76916 is not found' 00:20:56.361 00:20:56.361 real 0m49.041s 00:20:56.361 user 1m13.812s 00:20:56.361 sys 0m5.107s 00:20:56.361 12:18:57 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.361 12:18:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:56.361 ************************************ 00:20:56.361 END TEST ftl_trim 00:20:56.361 ************************************ 00:20:56.361 12:18:57 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:56.361 12:18:57 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:56.361 12:18:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.361 12:18:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:56.361 ************************************ 00:20:56.361 START TEST ftl_restore 00:20:56.361 ************************************ 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:20:56.361 * Looking for test storage... 00:20:56.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.361 12:18:57 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:56.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.361 --rc genhtml_branch_coverage=1 00:20:56.361 --rc genhtml_function_coverage=1 00:20:56.361 --rc genhtml_legend=1 00:20:56.361 --rc geninfo_all_blocks=1 00:20:56.361 --rc geninfo_unexecuted_blocks=1 00:20:56.361 00:20:56.361 ' 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:56.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.361 --rc 
genhtml_branch_coverage=1 00:20:56.361 --rc genhtml_function_coverage=1 00:20:56.361 --rc genhtml_legend=1 00:20:56.361 --rc geninfo_all_blocks=1 00:20:56.361 --rc geninfo_unexecuted_blocks=1 00:20:56.361 00:20:56.361 ' 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:56.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.361 --rc genhtml_branch_coverage=1 00:20:56.361 --rc genhtml_function_coverage=1 00:20:56.361 --rc genhtml_legend=1 00:20:56.361 --rc geninfo_all_blocks=1 00:20:56.361 --rc geninfo_unexecuted_blocks=1 00:20:56.361 00:20:56.361 ' 00:20:56.361 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:56.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.361 --rc genhtml_branch_coverage=1 00:20:56.361 --rc genhtml_function_coverage=1 00:20:56.361 --rc genhtml_legend=1 00:20:56.361 --rc geninfo_all_blocks=1 00:20:56.361 --rc geninfo_unexecuted_blocks=1 00:20:56.361 00:20:56.361 ' 00:20:56.361 12:18:57 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:56.361 12:18:57 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:56.361 12:18:57 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.361 12:18:57 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.361 12:18:57 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:56.361 12:18:57 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:56.361 12:18:57 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.L2nVUbJJPt 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77122 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77122 00:20:56.619 12:18:57 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.619 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77122 ']' 00:20:56.619 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.619 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.619 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.619 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.619 12:18:57 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:56.619 [2024-11-25 12:18:57.519332] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:20:56.619 [2024-11-25 12:18:57.519431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77122 ] 00:20:56.619 [2024-11-25 12:18:57.674297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.876 [2024-11-25 12:18:57.774402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.441 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.441 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:57.441 12:18:58 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:57.441 12:18:58 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:57.441 12:18:58 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:57.441 12:18:58 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:57.441 12:18:58 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:57.441 12:18:58 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:57.699 12:18:58 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:57.699 12:18:58 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:57.699 12:18:58 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:57.700 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:57.700 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:57.700 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:57.700 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:57.700 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:57.958 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:57.958 { 00:20:57.958 "name": "nvme0n1", 00:20:57.958 "aliases": [ 00:20:57.958 "24c20677-2b26-4c93-8880-5b04d1204166" 00:20:57.958 ], 00:20:57.958 "product_name": "NVMe disk", 00:20:57.958 "block_size": 4096, 00:20:57.958 "num_blocks": 1310720, 00:20:57.958 "uuid": "24c20677-2b26-4c93-8880-5b04d1204166", 00:20:57.958 "numa_id": -1, 00:20:57.958 "assigned_rate_limits": { 00:20:57.958 "rw_ios_per_sec": 0, 00:20:57.958 "rw_mbytes_per_sec": 0, 00:20:57.958 "r_mbytes_per_sec": 0, 00:20:57.958 "w_mbytes_per_sec": 0 00:20:57.958 }, 00:20:57.958 "claimed": true, 00:20:57.958 "claim_type": "read_many_write_one", 00:20:57.958 "zoned": false, 00:20:57.958 "supported_io_types": { 00:20:57.958 "read": true, 00:20:57.958 "write": true, 00:20:57.958 "unmap": true, 00:20:57.958 "flush": true, 00:20:57.958 "reset": true, 00:20:57.958 "nvme_admin": true, 00:20:57.958 "nvme_io": true, 00:20:57.958 "nvme_io_md": false, 00:20:57.958 "write_zeroes": true, 00:20:57.958 "zcopy": false, 00:20:57.958 "get_zone_info": false, 00:20:57.958 "zone_management": false, 00:20:57.958 "zone_append": false, 00:20:57.958 "compare": true, 00:20:57.958 "compare_and_write": false, 00:20:57.958 "abort": true, 00:20:57.958 "seek_hole": false, 00:20:57.958 "seek_data": false, 00:20:57.958 "copy": true, 00:20:57.958 "nvme_iov_md": false 00:20:57.958 }, 00:20:57.958 "driver_specific": { 00:20:57.958 "nvme": [ 
00:20:57.958 { 00:20:57.958 "pci_address": "0000:00:11.0", 00:20:57.958 "trid": { 00:20:57.958 "trtype": "PCIe", 00:20:57.958 "traddr": "0000:00:11.0" 00:20:57.958 }, 00:20:57.958 "ctrlr_data": { 00:20:57.958 "cntlid": 0, 00:20:57.958 "vendor_id": "0x1b36", 00:20:57.958 "model_number": "QEMU NVMe Ctrl", 00:20:57.958 "serial_number": "12341", 00:20:57.958 "firmware_revision": "8.0.0", 00:20:57.958 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:57.958 "oacs": { 00:20:57.958 "security": 0, 00:20:57.958 "format": 1, 00:20:57.958 "firmware": 0, 00:20:57.958 "ns_manage": 1 00:20:57.958 }, 00:20:57.958 "multi_ctrlr": false, 00:20:57.958 "ana_reporting": false 00:20:57.958 }, 00:20:57.958 "vs": { 00:20:57.958 "nvme_version": "1.4" 00:20:57.958 }, 00:20:57.958 "ns_data": { 00:20:57.958 "id": 1, 00:20:57.958 "can_share": false 00:20:57.958 } 00:20:57.958 } 00:20:57.958 ], 00:20:57.958 "mp_policy": "active_passive" 00:20:57.958 } 00:20:57.958 } 00:20:57.958 ]' 00:20:57.958 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:57.958 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:57.958 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:57.958 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:57.958 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:57.958 12:18:58 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:57.958 12:18:58 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:57.958 12:18:58 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:57.958 12:18:58 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:57.958 12:18:58 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:57.959 12:18:58 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:58.216 12:18:59 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=dfdd407b-cc17-4a91-9380-be1d66e1a5bc 00:20:58.216 12:18:59 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:58.216 12:18:59 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dfdd407b-cc17-4a91-9380-be1d66e1a5bc 00:20:58.473 12:18:59 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:58.473 12:18:59 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=f71f7c34-bd1e-41d8-8128-ddfc47421af3 00:20:58.473 12:18:59 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f71f7c34-bd1e-41d8-8128-ddfc47421af3 00:20:58.730 12:18:59 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:58.730 12:18:59 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:58.730 12:18:59 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:58.730 12:18:59 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:58.730 12:18:59 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:58.730 12:18:59 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:58.730 12:18:59 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:58.730 12:18:59 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:58.730 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:58.730 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:58.730 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:58.730 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:58.730 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:58.988 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:58.988 { 00:20:58.988 "name": "9ee907f9-1c79-4eb7-973f-725d61fc47d1", 00:20:58.988 "aliases": [ 00:20:58.988 "lvs/nvme0n1p0" 00:20:58.988 ], 00:20:58.989 "product_name": "Logical Volume", 00:20:58.989 "block_size": 4096, 00:20:58.989 "num_blocks": 26476544, 00:20:58.989 "uuid": "9ee907f9-1c79-4eb7-973f-725d61fc47d1", 00:20:58.989 "assigned_rate_limits": { 00:20:58.989 "rw_ios_per_sec": 0, 00:20:58.989 "rw_mbytes_per_sec": 0, 00:20:58.989 "r_mbytes_per_sec": 0, 00:20:58.989 "w_mbytes_per_sec": 0 00:20:58.989 }, 00:20:58.989 "claimed": false, 00:20:58.989 "zoned": false, 00:20:58.989 "supported_io_types": { 00:20:58.989 "read": true, 00:20:58.989 "write": true, 00:20:58.989 "unmap": true, 00:20:58.989 "flush": false, 00:20:58.989 "reset": true, 00:20:58.989 "nvme_admin": false, 00:20:58.989 "nvme_io": false, 00:20:58.989 "nvme_io_md": false, 00:20:58.989 "write_zeroes": true, 00:20:58.989 "zcopy": false, 00:20:58.989 "get_zone_info": false, 00:20:58.989 "zone_management": false, 00:20:58.989 "zone_append": false, 00:20:58.989 "compare": false, 00:20:58.989 "compare_and_write": false, 00:20:58.989 "abort": false, 00:20:58.989 "seek_hole": true, 00:20:58.989 "seek_data": true, 00:20:58.989 "copy": false, 00:20:58.989 "nvme_iov_md": false 00:20:58.989 }, 00:20:58.989 "driver_specific": { 00:20:58.989 "lvol": { 00:20:58.989 "lvol_store_uuid": "f71f7c34-bd1e-41d8-8128-ddfc47421af3", 00:20:58.989 "base_bdev": "nvme0n1", 00:20:58.989 "thin_provision": true, 00:20:58.989 "num_allocated_clusters": 0, 00:20:58.989 "snapshot": false, 00:20:58.989 "clone": false, 00:20:58.989 "esnap_clone": false 00:20:58.989 } 00:20:58.989 } 00:20:58.989 } 00:20:58.989 ]' 00:20:58.989 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:58.989 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:58.989 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:58.989 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:58.989 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:58.989 12:18:59 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:58.989 12:18:59 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:58.989 12:18:59 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:58.989 12:18:59 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:59.247 12:19:00 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:59.247 12:19:00 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:59.247 12:19:00 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:59.247 12:19:00 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:59.247 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:59.247 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:59.247 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:59.247 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:59.506 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:59.506 { 00:20:59.506 "name": "9ee907f9-1c79-4eb7-973f-725d61fc47d1", 00:20:59.506 "aliases": [ 00:20:59.506 "lvs/nvme0n1p0" 00:20:59.506 ], 00:20:59.506 "product_name": "Logical Volume", 00:20:59.506 "block_size": 4096, 00:20:59.506 "num_blocks": 26476544, 00:20:59.506 "uuid": "9ee907f9-1c79-4eb7-973f-725d61fc47d1", 00:20:59.506 "assigned_rate_limits": { 00:20:59.506 "rw_ios_per_sec": 0, 00:20:59.506 "rw_mbytes_per_sec": 0, 00:20:59.506 "r_mbytes_per_sec": 0, 00:20:59.506 "w_mbytes_per_sec": 0 00:20:59.506 }, 00:20:59.506 "claimed": false, 00:20:59.506 "zoned": false, 00:20:59.506 "supported_io_types": { 00:20:59.506 "read": true, 00:20:59.506 "write": true, 00:20:59.506 "unmap": true, 00:20:59.506 "flush": false, 00:20:59.506 "reset": true, 00:20:59.506 "nvme_admin": false, 00:20:59.506 "nvme_io": false, 00:20:59.506 "nvme_io_md": false, 00:20:59.506 "write_zeroes": true, 00:20:59.506 "zcopy": false, 00:20:59.506 "get_zone_info": false, 00:20:59.506 "zone_management": false, 00:20:59.506 "zone_append": false, 00:20:59.506 "compare": false, 00:20:59.506 "compare_and_write": false, 00:20:59.506 "abort": false, 00:20:59.506 "seek_hole": true, 00:20:59.506 "seek_data": true, 00:20:59.506 "copy": false, 00:20:59.506 "nvme_iov_md": false 00:20:59.506 }, 00:20:59.506 "driver_specific": { 00:20:59.506 "lvol": { 00:20:59.506 "lvol_store_uuid": "f71f7c34-bd1e-41d8-8128-ddfc47421af3", 00:20:59.506 "base_bdev": "nvme0n1", 00:20:59.506 "thin_provision": true, 00:20:59.506 "num_allocated_clusters": 0, 00:20:59.506 "snapshot": false, 00:20:59.506 "clone": false, 00:20:59.506 "esnap_clone": false 00:20:59.506 } 00:20:59.506 } 00:20:59.506 } 00:20:59.506 ]' 00:20:59.506 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:59.506 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:59.506 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:59.506 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:59.506 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:59.506 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:59.506 12:19:00 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:59.506 12:19:00 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:59.764 12:19:00 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:59.764 12:19:00 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:59.764 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:20:59.764 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:59.764 12:19:00 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:20:59.764 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:59.764 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9ee907f9-1c79-4eb7-973f-725d61fc47d1 00:21:00.021 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:00.021 { 00:21:00.021 "name": "9ee907f9-1c79-4eb7-973f-725d61fc47d1", 00:21:00.021 "aliases": [ 00:21:00.021 "lvs/nvme0n1p0" 00:21:00.021 ], 00:21:00.021 "product_name": "Logical Volume", 00:21:00.021 "block_size": 4096, 00:21:00.021 "num_blocks": 26476544, 00:21:00.021 "uuid": "9ee907f9-1c79-4eb7-973f-725d61fc47d1", 00:21:00.021 "assigned_rate_limits": { 00:21:00.021 "rw_ios_per_sec": 0, 00:21:00.021 "rw_mbytes_per_sec": 0, 00:21:00.021 "r_mbytes_per_sec": 0, 00:21:00.021 "w_mbytes_per_sec": 0 00:21:00.021 }, 00:21:00.021 "claimed": false, 00:21:00.021 "zoned": false, 00:21:00.021 "supported_io_types": { 00:21:00.021 "read": true, 00:21:00.021 "write": true, 00:21:00.021 "unmap": true, 00:21:00.021 "flush": false, 00:21:00.021 "reset": true, 00:21:00.021 "nvme_admin": false, 00:21:00.021 "nvme_io": false, 00:21:00.021 "nvme_io_md": false, 00:21:00.021 "write_zeroes": true, 00:21:00.021 "zcopy": false, 00:21:00.021 "get_zone_info": false, 00:21:00.021 "zone_management": false, 00:21:00.021 "zone_append": false, 00:21:00.021 "compare": false, 00:21:00.021 "compare_and_write": false, 00:21:00.021 "abort": false, 00:21:00.021 "seek_hole": true, 00:21:00.021 "seek_data": true, 00:21:00.021 "copy": false, 00:21:00.021 "nvme_iov_md": false 00:21:00.021 }, 00:21:00.021 "driver_specific": { 00:21:00.021 "lvol": { 00:21:00.021 "lvol_store_uuid": "f71f7c34-bd1e-41d8-8128-ddfc47421af3", 00:21:00.021 "base_bdev": "nvme0n1", 00:21:00.021 "thin_provision": true, 00:21:00.021 "num_allocated_clusters": 0, 00:21:00.021 "snapshot": false, 00:21:00.021 "clone": false, 00:21:00.021 "esnap_clone": false 00:21:00.021 } 00:21:00.021 } 00:21:00.021 } 00:21:00.021 ]' 00:21:00.021 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:00.021 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:00.021 12:19:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:00.021 12:19:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:00.021 12:19:01 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:00.021 12:19:01 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:00.021 12:19:01 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:00.021 12:19:01 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9ee907f9-1c79-4eb7-973f-725d61fc47d1 --l2p_dram_limit 10' 00:21:00.021 12:19:01 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:00.021 12:19:01 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:00.021 12:19:01 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:00.021 12:19:01 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:00.021 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:00.021 12:19:01 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9ee907f9-1c79-4eb7-973f-725d61fc47d1 --l2p_dram_limit 10 -c nvc0n1p0 00:21:00.278 
[2024-11-25 12:19:01.212152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.212199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.278 [2024-11-25 12:19:01.212214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:00.278 [2024-11-25 12:19:01.212221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.212267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.212274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.278 [2024-11-25 12:19:01.212282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:00.278 [2024-11-25 12:19:01.212288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.212308] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.278 [2024-11-25 12:19:01.212883] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.278 [2024-11-25 12:19:01.212899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.212905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.278 [2024-11-25 12:19:01.212913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:21:00.278 [2024-11-25 12:19:01.212919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.212964] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f81f1b95-15f5-4eae-9c10-92a99a1dcc63 00:21:00.278 [2024-11-25 12:19:01.213957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.213981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:00.278 [2024-11-25 12:19:01.213989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:00.278 [2024-11-25 12:19:01.213996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.218933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.218969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.278 [2024-11-25 12:19:01.218979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.876 ms 00:21:00.278 [2024-11-25 12:19:01.218988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.219060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.219069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.278 [2024-11-25 12:19:01.219076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:00.278 [2024-11-25 12:19:01.219085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.219130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.219139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.278 [2024-11-25 12:19:01.219145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:00.278 [2024-11-25 12:19:01.219154] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.219173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.278 [2024-11-25 12:19:01.222203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.222230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.278 [2024-11-25 12:19:01.222239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:21:00.278 [2024-11-25 12:19:01.222245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.222274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.222281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.278 [2024-11-25 12:19:01.222289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:00.278 [2024-11-25 12:19:01.222294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.222310] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:00.278 [2024-11-25 12:19:01.222420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:00.278 [2024-11-25 12:19:01.222432] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.278 [2024-11-25 12:19:01.222440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:00.278 [2024-11-25 12:19:01.222450] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.278 [2024-11-25 12:19:01.222456] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.278 [2024-11-25 12:19:01.222464] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:00.278 [2024-11-25 12:19:01.222470] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.278 [2024-11-25 12:19:01.222479] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:00.278 [2024-11-25 12:19:01.222484] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:00.278 [2024-11-25 12:19:01.222491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.222497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.278 [2024-11-25 12:19:01.222504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:21:00.278 [2024-11-25 12:19:01.222515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.222582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.278 [2024-11-25 12:19:01.222588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.278 [2024-11-25 12:19:01.222596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:00.278 [2024-11-25 12:19:01.222601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.278 [2024-11-25 12:19:01.222680] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.278 [2024-11-25 12:19:01.222687] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:21:00.278 [2024-11-25 12:19:01.222694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.278 [2024-11-25 12:19:01.222700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.278 [2024-11-25 12:19:01.222707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:00.278 [2024-11-25 12:19:01.222712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.278 [2024-11-25 12:19:01.222719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:00.278 [2024-11-25 12:19:01.222724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.279 [2024-11-25 12:19:01.222731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.279 [2024-11-25 12:19:01.222743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.279 [2024-11-25 12:19:01.222749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:00.279 [2024-11-25 12:19:01.222755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.279 [2024-11-25 12:19:01.222761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.279 [2024-11-25 12:19:01.222767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:00.279 [2024-11-25 12:19:01.222773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.279 [2024-11-25 12:19:01.222787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:00.279 [2024-11-25 12:19:01.222793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.279 [2024-11-25 12:19:01.222806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.279 [2024-11-25 12:19:01.222818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.279 [2024-11-25 12:19:01.222823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.279 [2024-11-25 12:19:01.222835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.279 [2024-11-25 12:19:01.222841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.279 [2024-11-25 12:19:01.222852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.279 [2024-11-25 12:19:01.222858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.279 [2024-11-25 12:19:01.222869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.279 [2024-11-25 12:19:01.222876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222881] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.279 [2024-11-25 12:19:01.222888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.279 [2024-11-25 12:19:01.222893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:00.279 [2024-11-25 12:19:01.222899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.279 [2024-11-25 12:19:01.222904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:00.279 [2024-11-25 12:19:01.222910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:00.279 [2024-11-25 12:19:01.222916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:00.279 [2024-11-25 12:19:01.222927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:00.279 [2024-11-25 12:19:01.222933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222938] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.279 [2024-11-25 12:19:01.222959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.279 [2024-11-25 12:19:01.222965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.279 [2024-11-25 12:19:01.222972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.279 [2024-11-25 12:19:01.222981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.279 [2024-11-25 12:19:01.222991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.279 [2024-11-25 12:19:01.222996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.279 [2024-11-25 12:19:01.223003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.279 [2024-11-25 12:19:01.223008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.279 [2024-11-25 12:19:01.223014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.279 [2024-11-25 12:19:01.223022] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.279 [2024-11-25 12:19:01.223031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.279 [2024-11-25 12:19:01.223039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:00.279 [2024-11-25 12:19:01.223046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:00.279 [2024-11-25 12:19:01.223051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:00.279 [2024-11-25 12:19:01.223059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:00.279 [2024-11-25 12:19:01.223065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:00.279 [2024-11-25 12:19:01.223071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:21:00.279 [2024-11-25 12:19:01.223077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:00.279 [2024-11-25 12:19:01.223083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:00.279 [2024-11-25 12:19:01.223089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:00.279 [2024-11-25 12:19:01.223097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:00.279 [2024-11-25 12:19:01.223102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:00.279 [2024-11-25 12:19:01.223108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:00.279 [2024-11-25 12:19:01.223114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:00.279 [2024-11-25 12:19:01.223121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:00.279 [2024-11-25 12:19:01.223126] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.279 [2024-11-25 12:19:01.223135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.279 [2024-11-25 12:19:01.223141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.279 [2024-11-25 12:19:01.223148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.279 [2024-11-25 12:19:01.223153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.279 [2024-11-25 12:19:01.223160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:00.279 [2024-11-25 12:19:01.223165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.279 [2024-11-25 12:19:01.223172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.279 [2024-11-25 12:19:01.223178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:21:00.279 [2024-11-25 12:19:01.223185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.279 [2024-11-25 12:19:01.223230] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
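[Note on the layout dump above] The figures are internally consistent: the 80.00 MiB l2p region is the reported L2P entry count times the 4-byte address size, and the superblock's hex block counts say the same thing in 4096-byte blocks (the region at blk_offs:0x20 blk_sz:0x5000 matches l2p's 0.12 MiB offset and 80 MiB size). A quick bash cross-check:

    echo $((20971520 * 4 / 1024 / 1024))     # L2P entries * 4 B/entry -> 80 (MiB)
    echo $((0x5000 * 4096 / 1024 / 1024))    # blk_sz:0x5000 in 4 KiB blocks -> 80 (MiB)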
00:21:00.279 [2024-11-25 12:19:01.223240] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:02.178 [2024-11-25 12:19:03.189053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.178 [2024-11-25 12:19:03.189118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:02.178 [2024-11-25 12:19:03.189133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1965.812 ms 00:21:02.178 [2024-11-25 12:19:03.189143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.178 [2024-11-25 12:19:03.214978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.178 [2024-11-25 12:19:03.215030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.178 [2024-11-25 12:19:03.215043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.620 ms 00:21:02.178 [2024-11-25 12:19:03.215052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.178 [2024-11-25 12:19:03.215200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.178 [2024-11-25 12:19:03.215212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.178 [2024-11-25 12:19:03.215221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:02.178 [2024-11-25 12:19:03.215231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.178 [2024-11-25 12:19:03.246170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.178 [2024-11-25 12:19:03.246223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.178 [2024-11-25 12:19:03.246235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.886 ms 00:21:02.178 [2024-11-25 12:19:03.246244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.178 [2024-11-25 12:19:03.246285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.178 [2024-11-25 12:19:03.246299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.178 [2024-11-25 12:19:03.246307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.178 [2024-11-25 12:19:03.246316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.178 [2024-11-25 12:19:03.246687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.178 [2024-11-25 12:19:03.246706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.178 [2024-11-25 12:19:03.246714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:21:02.178 [2024-11-25 12:19:03.246724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.178 [2024-11-25 12:19:03.246841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.178 [2024-11-25 12:19:03.246850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.178 [2024-11-25 12:19:03.246860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:02.178 [2024-11-25 12:19:03.246872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.260743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.260916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.436 [2024-11-25 
12:19:03.260932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.854 ms 00:21:02.436 [2024-11-25 12:19:03.260941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.272217] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:02.436 [2024-11-25 12:19:03.274828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.274857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.436 [2024-11-25 12:19:03.274870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.787 ms 00:21:02.436 [2024-11-25 12:19:03.274877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.341669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.341728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:02.436 [2024-11-25 12:19:03.341745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.759 ms 00:21:02.436 [2024-11-25 12:19:03.341754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.341964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.341979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.436 [2024-11-25 12:19:03.341991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:21:02.436 [2024-11-25 12:19:03.341999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.365023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.365067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:02.436 [2024-11-25 12:19:03.365081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.971 ms 00:21:02.436 [2024-11-25 12:19:03.365089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.387536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.387691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:02.436 [2024-11-25 12:19:03.387714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.404 ms 00:21:02.436 [2024-11-25 12:19:03.387721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.388305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.388323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.436 [2024-11-25 12:19:03.388334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:21:02.436 [2024-11-25 12:19:03.388342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.453809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.453867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:02.436 [2024-11-25 12:19:03.453885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.421 ms 00:21:02.436 [2024-11-25 12:19:03.453895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 
12:19:03.478167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.478218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:02.436 [2024-11-25 12:19:03.478232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.170 ms 00:21:02.436 [2024-11-25 12:19:03.478239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.436 [2024-11-25 12:19:03.502041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.436 [2024-11-25 12:19:03.502087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:02.436 [2024-11-25 12:19:03.502100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.754 ms 00:21:02.436 [2024-11-25 12:19:03.502107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.695 [2024-11-25 12:19:03.525291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.695 [2024-11-25 12:19:03.525545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.695 [2024-11-25 12:19:03.525566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.140 ms 00:21:02.695 [2024-11-25 12:19:03.525574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.695 [2024-11-25 12:19:03.525614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.695 [2024-11-25 12:19:03.525623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:02.695 [2024-11-25 12:19:03.525635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:02.695 [2024-11-25 12:19:03.525642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.695 [2024-11-25 12:19:03.525734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.695 [2024-11-25 12:19:03.525745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.695 [2024-11-25 12:19:03.525757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:02.695 [2024-11-25 12:19:03.525765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.695 [2024-11-25 12:19:03.526627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2314.069 ms, result 0 00:21:02.695 { 00:21:02.695 "name": "ftl0", 00:21:02.695 "uuid": "f81f1b95-15f5-4eae-9c10-92a99a1dcc63" 00:21:02.695 } 00:21:02.695 12:19:03 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:02.695 12:19:03 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:02.953 12:19:03 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:02.953 12:19:03 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:02.953 [2024-11-25 12:19:04.030453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.953 [2024-11-25 12:19:04.030510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:02.953 [2024-11-25 12:19:04.030524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:02.953 [2024-11-25 12:19:04.030538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.953 [2024-11-25 12:19:04.030561] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
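[Note on restore.sh@61-65 above] The two echo lines bracket the save_subsystem_config output into a complete JSON document; that is the config the later spdk_dd invocation loads via --json=.../test/ftl/config/ftl.json, while bdev_ftl_unload performs the clean shutdown whose persist steps are traced below. Reconstructed as a sketch (the redirect target is inferred from that later spdk_dd call, not shown in the trace itself):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    {
        echo '{"subsystems": ['
        "$rpc_py" save_subsystem_config -n bdev   # dump the live bdev subsystem config
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    "$rpc_py" bdev_ftl_unload -b ftl0             # persists L2P/metadata, sets clean state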
00:21:03.213 [2024-11-25 12:19:04.033170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.033210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:03.213 [2024-11-25 12:19:04.033223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:21:03.213 [2024-11-25 12:19:04.033231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.033517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.033532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:03.213 [2024-11-25 12:19:04.033545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:21:03.213 [2024-11-25 12:19:04.033553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.036798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.036926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:03.213 [2024-11-25 12:19:04.036944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:21:03.213 [2024-11-25 12:19:04.036965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.043223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.043336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:03.213 [2024-11-25 12:19:04.043356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:21:03.213 [2024-11-25 12:19:04.043364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.067073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.067110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:03.213 [2024-11-25 12:19:04.067123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.633 ms 00:21:03.213 [2024-11-25 12:19:04.067130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.081638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.081678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:03.213 [2024-11-25 12:19:04.081693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.463 ms 00:21:03.213 [2024-11-25 12:19:04.081702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.081855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.081866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:03.213 [2024-11-25 12:19:04.081876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:21:03.213 [2024-11-25 12:19:04.081884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.104686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.104720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:03.213 [2024-11-25 12:19:04.104732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.783 ms 00:21:03.213 [2024-11-25 12:19:04.104740] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.127508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.127544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:03.213 [2024-11-25 12:19:04.127558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.727 ms 00:21:03.213 [2024-11-25 12:19:04.127566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.149785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.149827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:03.213 [2024-11-25 12:19:04.149840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.175 ms 00:21:03.213 [2024-11-25 12:19:04.149847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.172594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.213 [2024-11-25 12:19:04.172640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:03.213 [2024-11-25 12:19:04.172654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.661 ms 00:21:03.213 [2024-11-25 12:19:04.172662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.213 [2024-11-25 12:19:04.172705] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:03.213 [2024-11-25 12:19:04.172721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 
12:19:04.172841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.172942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:03.213 [2024-11-25 12:19:04.173125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:21:03.214 [2024-11-25 12:19:04.173136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:03.214 [2024-11-25 12:19:04.173676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:03.214 [2024-11-25 12:19:04.173688] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f81f1b95-15f5-4eae-9c10-92a99a1dcc63 00:21:03.214 [2024-11-25 12:19:04.173696] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:03.214 [2024-11-25 12:19:04.173706] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:03.214 [2024-11-25 12:19:04.173713] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:03.214 [2024-11-25 12:19:04.173725] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:03.214 [2024-11-25 12:19:04.173732] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:03.214 [2024-11-25 12:19:04.173741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:03.214 [2024-11-25 12:19:04.173748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:03.214 [2024-11-25 12:19:04.173757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:03.214 [2024-11-25 12:19:04.173763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:03.214 [2024-11-25 12:19:04.173771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-11-25 12:19:04.173779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:03.214 [2024-11-25 12:19:04.173789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:21:03.214 [2024-11-25 12:19:04.173796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-11-25 12:19:04.186007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-11-25 12:19:04.186043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:21:03.214 [2024-11-25 12:19:04.186057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.169 ms 00:21:03.214 [2024-11-25 12:19:04.186065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-11-25 12:19:04.186397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.214 [2024-11-25 12:19:04.186410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:03.214 [2024-11-25 12:19:04.186421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:03.214 [2024-11-25 12:19:04.186431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-11-25 12:19:04.227441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.214 [2024-11-25 12:19:04.227480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:03.214 [2024-11-25 12:19:04.227493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.214 [2024-11-25 12:19:04.227500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-11-25 12:19:04.227562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.214 [2024-11-25 12:19:04.227571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:03.214 [2024-11-25 12:19:04.227581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.214 [2024-11-25 12:19:04.227595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.214 [2024-11-25 12:19:04.227694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.215 [2024-11-25 12:19:04.227709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:03.215 [2024-11-25 12:19:04.227719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.215 [2024-11-25 12:19:04.227726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.215 [2024-11-25 12:19:04.227769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.215 [2024-11-25 12:19:04.227782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:03.215 [2024-11-25 12:19:04.227794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.215 [2024-11-25 12:19:04.227802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.473 [2024-11-25 12:19:04.303848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.473 [2024-11-25 12:19:04.304072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.473 [2024-11-25 12:19:04.304094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.473 [2024-11-25 12:19:04.304102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.473 [2024-11-25 12:19:04.367494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.473 [2024-11-25 12:19:04.367677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.473 [2024-11-25 12:19:04.367701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.473 [2024-11-25 12:19:04.367717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.473 [2024-11-25 12:19:04.367812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.473 [2024-11-25 12:19:04.367824] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.473 [2024-11-25 12:19:04.367834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.473 [2024-11-25 12:19:04.367842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.473 [2024-11-25 12:19:04.367912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.473 [2024-11-25 12:19:04.367924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.473 [2024-11-25 12:19:04.367934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.473 [2024-11-25 12:19:04.367941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.473 [2024-11-25 12:19:04.368079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.473 [2024-11-25 12:19:04.368092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.473 [2024-11-25 12:19:04.368106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.473 [2024-11-25 12:19:04.368118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.473 [2024-11-25 12:19:04.368162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.473 [2024-11-25 12:19:04.368172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:03.474 [2024-11-25 12:19:04.368181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-11-25 12:19:04.368188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-11-25 12:19:04.368223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-11-25 12:19:04.368233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.474 [2024-11-25 12:19:04.368242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-11-25 12:19:04.368250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-11-25 12:19:04.368293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-11-25 12:19:04.368303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.474 [2024-11-25 12:19:04.368312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-11-25 12:19:04.368319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-11-25 12:19:04.368440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.955 ms, result 0 00:21:03.474 true 00:21:03.474 12:19:04 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77122 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77122 ']' 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77122 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77122 00:21:03.474 killing process with pid 77122 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77122' 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77122 00:21:03.474 12:19:04 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77122 00:21:15.845 12:19:14 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:17.742 262144+0 records in 00:21:17.742 262144+0 records out 00:21:17.742 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.93004 s, 273 MB/s 00:21:17.742 12:19:18 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:19.719 12:19:20 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:19.719 [2024-11-25 12:19:20.457012] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:21:19.719 [2024-11-25 12:19:20.457115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77333 ] 00:21:19.719 [2024-11-25 12:19:20.612485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.719 [2024-11-25 12:19:20.707006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.977 [2024-11-25 12:19:20.960004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:19.977 [2024-11-25 12:19:20.960064] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.291 [2024-11-25 12:19:21.113538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.113606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:20.291 [2024-11-25 12:19:21.113631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:20.291 [2024-11-25 12:19:21.113644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.113716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.113730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:20.291 [2024-11-25 12:19:21.113745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:20.291 [2024-11-25 12:19:21.113758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.113788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:20.291 [2024-11-25 12:19:21.114899] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:20.291 [2024-11-25 12:19:21.115073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.115087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:20.291 [2024-11-25 12:19:21.115095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:21:20.291 [2024-11-25 12:19:21.115103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.116566] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:20.291 [2024-11-25 12:19:21.128872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.128917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:20.291 [2024-11-25 12:19:21.128931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.307 ms 00:21:20.291 [2024-11-25 12:19:21.128940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.129023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.129033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:20.291 [2024-11-25 12:19:21.129041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:20.291 [2024-11-25 12:19:21.129048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.134295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.134440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:20.291 [2024-11-25 12:19:21.134456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.183 ms 00:21:20.291 [2024-11-25 12:19:21.134464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.134547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.134557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:20.291 [2024-11-25 12:19:21.134565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:20.291 [2024-11-25 12:19:21.134573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.134616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.134625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:20.291 [2024-11-25 12:19:21.134633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:20.291 [2024-11-25 12:19:21.134640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.134662] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:20.291 [2024-11-25 12:19:21.137852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.137980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:20.291 [2024-11-25 12:19:21.137996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:21:20.291 [2024-11-25 12:19:21.138008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.138037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.291 [2024-11-25 12:19:21.138046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:20.291 [2024-11-25 12:19:21.138053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:20.291 [2024-11-25 12:19:21.138061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.291 [2024-11-25 12:19:21.138081] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:20.291 [2024-11-25 12:19:21.138098] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:20.291 [2024-11-25 12:19:21.138133] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:20.291 [2024-11-25 12:19:21.138149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:20.291 [2024-11-25 12:19:21.138253] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:20.291 [2024-11-25 12:19:21.138263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:20.291 [2024-11-25 12:19:21.138273] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:20.291 [2024-11-25 12:19:21.138283] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:20.291 [2024-11-25 12:19:21.138291] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:20.291 [2024-11-25 12:19:21.138299] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:20.291 [2024-11-25 12:19:21.138306] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:20.292 [2024-11-25 12:19:21.138313] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:20.292 [2024-11-25 12:19:21.138320] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:20.292 [2024-11-25 12:19:21.138330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.292 [2024-11-25 12:19:21.138337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:20.292 [2024-11-25 12:19:21.138345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:21:20.292 [2024-11-25 12:19:21.138351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.292 [2024-11-25 12:19:21.138437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.292 [2024-11-25 12:19:21.138445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:20.292 [2024-11-25 12:19:21.138452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:20.292 [2024-11-25 12:19:21.138459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.292 [2024-11-25 12:19:21.138561] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:20.292 [2024-11-25 12:19:21.138573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:20.292 [2024-11-25 12:19:21.138582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:20.292 [2024-11-25 12:19:21.138603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:20.292 [2024-11-25 12:19:21.138624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:20.292 [2024-11-25 
12:19:21.138630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.292 [2024-11-25 12:19:21.138637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:20.292 [2024-11-25 12:19:21.138644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:20.292 [2024-11-25 12:19:21.138650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.292 [2024-11-25 12:19:21.138657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:20.292 [2024-11-25 12:19:21.138664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:20.292 [2024-11-25 12:19:21.138676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:20.292 [2024-11-25 12:19:21.138689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:20.292 [2024-11-25 12:19:21.138709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:20.292 [2024-11-25 12:19:21.138728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:20.292 [2024-11-25 12:19:21.138746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:20.292 [2024-11-25 12:19:21.138765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:20.292 [2024-11-25 12:19:21.138785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.292 [2024-11-25 12:19:21.138797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:20.292 [2024-11-25 12:19:21.138804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:20.292 [2024-11-25 12:19:21.138810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.292 [2024-11-25 12:19:21.138816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:20.292 [2024-11-25 12:19:21.138822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:20.292 [2024-11-25 12:19:21.138829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:21:20.292 [2024-11-25 12:19:21.138841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:20.292 [2024-11-25 12:19:21.138847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138854] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:20.292 [2024-11-25 12:19:21.138862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:20.292 [2024-11-25 12:19:21.138868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.292 [2024-11-25 12:19:21.138884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:20.292 [2024-11-25 12:19:21.138890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:20.292 [2024-11-25 12:19:21.138897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:20.292 [2024-11-25 12:19:21.138903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:20.292 [2024-11-25 12:19:21.138909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:20.292 [2024-11-25 12:19:21.138916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:20.292 [2024-11-25 12:19:21.138924] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:20.292 [2024-11-25 12:19:21.138933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.292 [2024-11-25 12:19:21.138941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:20.292 [2024-11-25 12:19:21.138959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:20.292 [2024-11-25 12:19:21.138966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:20.292 [2024-11-25 12:19:21.138973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:20.292 [2024-11-25 12:19:21.138980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:20.292 [2024-11-25 12:19:21.138987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:20.292 [2024-11-25 12:19:21.138994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:20.292 [2024-11-25 12:19:21.139002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:20.292 [2024-11-25 12:19:21.139014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:20.292 [2024-11-25 12:19:21.139021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:20.292 [2024-11-25 12:19:21.139029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:20.292 [2024-11-25 12:19:21.139035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:20.292 [2024-11-25 12:19:21.139042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:20.292 [2024-11-25 12:19:21.139049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:20.292 [2024-11-25 12:19:21.139056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:20.292 [2024-11-25 12:19:21.139066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.292 [2024-11-25 12:19:21.139075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:20.292 [2024-11-25 12:19:21.139083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:20.292 [2024-11-25 12:19:21.139090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:20.292 [2024-11-25 12:19:21.139097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:20.292 [2024-11-25 12:19:21.139104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.292 [2024-11-25 12:19:21.139111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:20.292 [2024-11-25 12:19:21.139118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:21:20.292 [2024-11-25 12:19:21.139126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.292 [2024-11-25 12:19:21.165294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.292 [2024-11-25 12:19:21.165441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:20.292 [2024-11-25 12:19:21.165457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.100 ms 00:21:20.292 [2024-11-25 12:19:21.165465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.292 [2024-11-25 12:19:21.165562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.292 [2024-11-25 12:19:21.165570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:20.292 [2024-11-25 12:19:21.165578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:20.292 [2024-11-25 12:19:21.165585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.292 [2024-11-25 12:19:21.207324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.292 [2024-11-25 12:19:21.207373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:20.292 [2024-11-25 12:19:21.207386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.677 ms 00:21:20.293 [2024-11-25 12:19:21.207394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.293 [2024-11-25 12:19:21.207449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.293 [2024-11-25 
12:19:21.207459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:20.293 [2024-11-25 12:19:21.207468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:20.293 [2024-11-25 12:19:21.207478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.293 [2024-11-25 12:19:21.207844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.293 [2024-11-25 12:19:21.207861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:20.293 [2024-11-25 12:19:21.207871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:21:20.293 [2024-11-25 12:19:21.207878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.293 [2024-11-25 12:19:21.208029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.293 [2024-11-25 12:19:21.208040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:20.293 [2024-11-25 12:19:21.208048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:20.293 [2024-11-25 12:19:21.208060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.293 [2024-11-25 12:19:21.220900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.293 [2024-11-25 12:19:21.221059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:20.293 [2024-11-25 12:19:21.221079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.821 ms 00:21:20.293 [2024-11-25 12:19:21.221086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.293 [2024-11-25 12:19:21.233585] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:20.293 [2024-11-25 12:19:21.233620] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:20.293 [2024-11-25 12:19:21.233633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.293 [2024-11-25 12:19:21.233641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:20.293 [2024-11-25 12:19:21.233649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.444 ms 00:21:20.293 [2024-11-25 12:19:21.233656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.293 [2024-11-25 12:19:21.257497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.293 [2024-11-25 12:19:21.257540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:20.293 [2024-11-25 12:19:21.257558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.802 ms 00:21:20.293 [2024-11-25 12:19:21.257566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.293 [2024-11-25 12:19:21.269021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.293 [2024-11-25 12:19:21.269059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:20.293 [2024-11-25 12:19:21.269069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.403 ms 00:21:20.293 [2024-11-25 12:19:21.269076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.280442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.280569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:21:20.551 [2024-11-25 12:19:21.280585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.333 ms 00:21:20.551 [2024-11-25 12:19:21.280593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.281217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.281238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:20.551 [2024-11-25 12:19:21.281247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:21:20.551 [2024-11-25 12:19:21.281255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.336733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.336790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:20.551 [2024-11-25 12:19:21.336803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.457 ms 00:21:20.551 [2024-11-25 12:19:21.336818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.347583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:20.551 [2024-11-25 12:19:21.350419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.350451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:20.551 [2024-11-25 12:19:21.350464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.545 ms 00:21:20.551 [2024-11-25 12:19:21.350474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.350578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.350588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:20.551 [2024-11-25 12:19:21.350598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:20.551 [2024-11-25 12:19:21.350605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.350672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.350683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:20.551 [2024-11-25 12:19:21.350691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:20.551 [2024-11-25 12:19:21.350698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.350716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.350724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:20.551 [2024-11-25 12:19:21.350732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:20.551 [2024-11-25 12:19:21.350739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.350768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:20.551 [2024-11-25 12:19:21.350778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.350787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:20.551 [2024-11-25 12:19:21.350795] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:20.551 [2024-11-25 12:19:21.350802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.374235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.374367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:20.551 [2024-11-25 12:19:21.374426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.416 ms 00:21:20.551 [2024-11-25 12:19:21.374451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.374625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.551 [2024-11-25 12:19:21.374716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:20.551 [2024-11-25 12:19:21.374770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:20.551 [2024-11-25 12:19:21.374825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.551 [2024-11-25 12:19:21.375736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 261.809 ms, result 0 00:21:21.485  [2024-11-25T12:19:23.496Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-25T12:19:24.453Z] Copying: 91/1024 [MB] (44 MBps) [2024-11-25T12:19:25.426Z] Copying: 136/1024 [MB] (44 MBps) [2024-11-25T12:19:26.799Z] Copying: 180/1024 [MB] (44 MBps) [2024-11-25T12:19:27.732Z] Copying: 226/1024 [MB] (45 MBps) [2024-11-25T12:19:28.663Z] Copying: 270/1024 [MB] (44 MBps) [2024-11-25T12:19:29.594Z] Copying: 314/1024 [MB] (44 MBps) [2024-11-25T12:19:30.617Z] Copying: 359/1024 [MB] (44 MBps) [2024-11-25T12:19:31.550Z] Copying: 402/1024 [MB] (43 MBps) [2024-11-25T12:19:32.483Z] Copying: 446/1024 [MB] (43 MBps) [2024-11-25T12:19:33.415Z] Copying: 489/1024 [MB] (43 MBps) [2024-11-25T12:19:34.788Z] Copying: 529/1024 [MB] (39 MBps) [2024-11-25T12:19:35.391Z] Copying: 574/1024 [MB] (44 MBps) [2024-11-25T12:19:36.763Z] Copying: 618/1024 [MB] (44 MBps) [2024-11-25T12:19:37.696Z] Copying: 663/1024 [MB] (44 MBps) [2024-11-25T12:19:38.629Z] Copying: 707/1024 [MB] (44 MBps) [2024-11-25T12:19:39.562Z] Copying: 752/1024 [MB] (45 MBps) [2024-11-25T12:19:40.495Z] Copying: 797/1024 [MB] (44 MBps) [2024-11-25T12:19:41.448Z] Copying: 842/1024 [MB] (44 MBps) [2024-11-25T12:19:42.821Z] Copying: 886/1024 [MB] (44 MBps) [2024-11-25T12:19:43.754Z] Copying: 935/1024 [MB] (48 MBps) [2024-11-25T12:19:44.689Z] Copying: 980/1024 [MB] (45 MBps) [2024-11-25T12:19:44.689Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-11-25 12:19:44.333123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.333171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:43.609 [2024-11-25 12:19:44.333185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:43.609 [2024-11-25 12:19:44.333195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.333216] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:43.609 [2024-11-25 12:19:44.335832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.335867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:43.609 [2024-11-25 12:19:44.335879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.600 ms 00:21:43.609 [2024-11-25 12:19:44.335889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.337334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.337370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:43.609 [2024-11-25 12:19:44.337380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms 00:21:43.609 [2024-11-25 12:19:44.337387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.350931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.350984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:43.609 [2024-11-25 12:19:44.350996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.526 ms 00:21:43.609 [2024-11-25 12:19:44.351004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.357168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.357213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:43.609 [2024-11-25 12:19:44.357223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:21:43.609 [2024-11-25 12:19:44.357231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.381392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.381459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:43.609 [2024-11-25 12:19:44.381473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.107 ms 00:21:43.609 [2024-11-25 12:19:44.381480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.396195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.396249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:43.609 [2024-11-25 12:19:44.396262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.666 ms 00:21:43.609 [2024-11-25 12:19:44.396270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.396420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.396432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:43.609 [2024-11-25 12:19:44.396449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:43.609 [2024-11-25 12:19:44.396457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.420609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.420659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:43.609 [2024-11-25 12:19:44.420670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.137 ms 00:21:43.609 [2024-11-25 12:19:44.420679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.444205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.444250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:43.609 [2024-11-25 
12:19:44.444272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.481 ms 00:21:43.609 [2024-11-25 12:19:44.444280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.467043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.467088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:43.609 [2024-11-25 12:19:44.467100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.721 ms 00:21:43.609 [2024-11-25 12:19:44.467107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.489736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.609 [2024-11-25 12:19:44.489784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:43.609 [2024-11-25 12:19:44.489796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.565 ms 00:21:43.609 [2024-11-25 12:19:44.489804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.609 [2024-11-25 12:19:44.489847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:43.609 [2024-11-25 12:19:44.489861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.489986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:43.609 [2024-11-25 12:19:44.490010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490018] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 
12:19:44.490210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:21:43.610 [2024-11-25 12:19:44.490393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:43.610 [2024-11-25 12:19:44.490652] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:43.610 [2024-11-25 12:19:44.490665] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f81f1b95-15f5-4eae-9c10-92a99a1dcc63 00:21:43.610 [2024-11-25 12:19:44.490675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:43.610 [2024-11-25 12:19:44.490685] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:43.610 [2024-11-25 12:19:44.490693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:43.610 [2024-11-25 12:19:44.490702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:43.610 [2024-11-25 12:19:44.490709] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:43.610 [2024-11-25 12:19:44.490717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:43.611 [2024-11-25 12:19:44.490724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:43.611 [2024-11-25 12:19:44.490738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:43.611 [2024-11-25 12:19:44.490744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:43.611 [2024-11-25 12:19:44.490751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.611 [2024-11-25 12:19:44.490759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:43.611 [2024-11-25 12:19:44.490768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:21:43.611 [2024-11-25 12:19:44.490775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.503159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.611 [2024-11-25 12:19:44.503204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:43.611 [2024-11-25 12:19:44.503216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.366 ms 00:21:43.611 [2024-11-25 12:19:44.503224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.503577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:43.611 [2024-11-25 12:19:44.503587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:43.611 [2024-11-25 12:19:44.503595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:21:43.611 [2024-11-25 12:19:44.503602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.536103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.536156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.611 [2024-11-25 12:19:44.536168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.536175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.536240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.536249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.611 [2024-11-25 12:19:44.536257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.536264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.536328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.536337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.611 [2024-11-25 12:19:44.536345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.536353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.536367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.536375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.611 [2024-11-25 12:19:44.536383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.536390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.614315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.614368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.611 [2024-11-25 12:19:44.614380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.614388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.678834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.678888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.611 [2024-11-25 12:19:44.678900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.678908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.679000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.679020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.611 [2024-11-25 12:19:44.679028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.679035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:43.611 [2024-11-25 12:19:44.679069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.679078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.611 [2024-11-25 12:19:44.679086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.679094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.679176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.679188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.611 [2024-11-25 12:19:44.679197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.679204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.679236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.679246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:43.611 [2024-11-25 12:19:44.679254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.679262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.679294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.679308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.611 [2024-11-25 12:19:44.679318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.679326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.679365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.611 [2024-11-25 12:19:44.679375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.611 [2024-11-25 12:19:44.679383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.611 [2024-11-25 12:19:44.679391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.611 [2024-11-25 12:19:44.679502] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.352 ms, result 0 00:21:46.138 00:21:46.138 00:21:46.138 12:19:47 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:46.139 [2024-11-25 12:19:47.134218] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
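For reference, the restore.sh steps traced in this run form a complete write/readback round trip through the ftl0 bdev: line @69 fills a 1 GiB testfile from /dev/urandom (256K records x 4 KiB; the dd output above reports 1073741824 bytes in 3.93 s, about 273 MB/s), @70 checksums it, @73 writes it into ftl0 via spdk_dd (1024 MiB copied at an average 44 MBps in the progress trace, roughly 23 s), the FTL instance is shut down cleanly, and @74 reads the same 262144 blocks back out. A minimal sketch of that flow follows, assuming the same paths and JSON config the harness passes; the closing checksum comparison is inferred from the test's purpose and is not shown in this excerpt.

#!/usr/bin/env bash
# Sketch of the write/readback round trip exercised by test/ftl/restore.sh.
# Paths and spdk_dd flags are taken from the traced commands above; the final
# checksum comparison is an assumption (this excerpt only shows the I/O steps).
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK/test/ftl/testfile
FTL_JSON=$SPDK/test/ftl/config/ftl.json

# restore.sh@69: 256K records x 4 KiB = 1 GiB of random data
# (logged: 1073741824 bytes copied in 3.93004 s, ~273 MB/s)
dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

# restore.sh@70: checksum of the source data, captured before readback
# overwrites the file
md5_before=$(md5sum "$TESTFILE" | awk '{print $1}')

# restore.sh@73: write the file into the ftl0 bdev
# (traced above at an average 44 MBps over 1024 MiB)
"$SPDK"/build/bin/spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"

# restore.sh@74: read the same 262144 x 4 KiB blocks back out of ftl0
"$SPDK"/build/bin/spdk_dd --ib=ftl0 --of="$TESTFILE" --json="$FTL_JSON" --count=262144

# Assumed verification step: the restore test passes iff the data survived
# the FTL shutdown/startup cycle between the two spdk_dd invocations
md5_after=$(md5sum "$TESTFILE" | awk '{print $1}')
if [[ "$md5_before" == "$md5_after" ]]; then
  echo "ftl_restore: data intact across FTL shutdown/startup"
else
  echo "ftl_restore: checksum mismatch" >&2
  exit 1
fi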
00:21:46.139 [2024-11-25 12:19:47.134492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77599 ] 00:21:46.396 [2024-11-25 12:19:47.287296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.396 [2024-11-25 12:19:47.387396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.667 [2024-11-25 12:19:47.619897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:46.667 [2024-11-25 12:19:47.619976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:46.925 [2024-11-25 12:19:47.771490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.925 [2024-11-25 12:19:47.771651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:46.925 [2024-11-25 12:19:47.771674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:46.925 [2024-11-25 12:19:47.771681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.771726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.771735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:46.926 [2024-11-25 12:19:47.771744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:46.926 [2024-11-25 12:19:47.771751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.771767] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:46.926 [2024-11-25 12:19:47.772303] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:46.926 [2024-11-25 12:19:47.772320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.772327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:46.926 [2024-11-25 12:19:47.772335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:21:46.926 [2024-11-25 12:19:47.772341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.773649] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:46.926 [2024-11-25 12:19:47.784281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.784385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:46.926 [2024-11-25 12:19:47.784460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.633 ms 00:21:46.926 [2024-11-25 12:19:47.784480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.784533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.784660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:46.926 [2024-11-25 12:19:47.784680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:46.926 [2024-11-25 12:19:47.784696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.791110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
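Both spdk_dd invocations consume --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json, which is not reproduced in this log. Based on the startup notices — nvc0n1p0 used as the write-buffer cache and device UUID f81f1b95-15f5-4eae-9c10-92a99a1dcc63 — one plausible minimal shape is a bdev-subsystem entry invoking SPDK's bdev_ftl_create; the sketch below is a hypothetical reconstruction, and the base bdev name in particular is an assumption not visible in this excerpt.

# Hypothetical reconstruction of test/ftl/config/ftl.json -- the real file is
# not shown in this log. Method and parameter names follow SPDK's
# bdev_ftl_create RPC; the base bdev name (nvme0n1) is an assumption, while
# the cache partition and UUID come from the notices above.
cat > ftl.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_ftl_create",
          "params": {
            "name": "ftl0",
            "base_bdev": "nvme0n1",
            "cache": "nvc0n1p0",
            "uuid": "f81f1b95-15f5-4eae-9c10-92a99a1dcc63"
          }
        }
      ]
    }
  ]
}
EOF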
00:21:46.926 [2024-11-25 12:19:47.791239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:46.926 [2024-11-25 12:19:47.791288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.349 ms 00:21:46.926 [2024-11-25 12:19:47.791307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.791380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.791468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:46.926 [2024-11-25 12:19:47.791487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:46.926 [2024-11-25 12:19:47.791503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.791550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.791569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:46.926 [2024-11-25 12:19:47.791625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:46.926 [2024-11-25 12:19:47.791643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.791673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:46.926 [2024-11-25 12:19:47.794847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.794938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:46.926 [2024-11-25 12:19:47.794996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:21:46.926 [2024-11-25 12:19:47.795019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.795060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.795131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:46.926 [2024-11-25 12:19:47.795157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:46.926 [2024-11-25 12:19:47.795173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.795205] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:46.926 [2024-11-25 12:19:47.795232] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:46.926 [2024-11-25 12:19:47.795278] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:46.926 [2024-11-25 12:19:47.795312] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:46.926 [2024-11-25 12:19:47.795524] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:46.926 [2024-11-25 12:19:47.795588] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:46.926 [2024-11-25 12:19:47.795646] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:46.926 [2024-11-25 12:19:47.795673] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:46.926 [2024-11-25 12:19:47.795699] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:46.926 [2024-11-25 12:19:47.795761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:46.926 [2024-11-25 12:19:47.795779] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:46.926 [2024-11-25 12:19:47.795794] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:46.926 [2024-11-25 12:19:47.795809] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:46.926 [2024-11-25 12:19:47.795848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.795866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:46.926 [2024-11-25 12:19:47.795954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:21:46.926 [2024-11-25 12:19:47.795973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.796055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.926 [2024-11-25 12:19:47.796169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:46.926 [2024-11-25 12:19:47.796188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:46.926 [2024-11-25 12:19:47.796203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.926 [2024-11-25 12:19:47.796309] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:46.926 [2024-11-25 12:19:47.796335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:46.926 [2024-11-25 12:19:47.796352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:46.926 [2024-11-25 12:19:47.796394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.926 [2024-11-25 12:19:47.796412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:46.926 [2024-11-25 12:19:47.796427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:46.926 [2024-11-25 12:19:47.796466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:46.926 [2024-11-25 12:19:47.796483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:46.926 [2024-11-25 12:19:47.796498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:46.926 [2024-11-25 12:19:47.796530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:46.926 [2024-11-25 12:19:47.796546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:46.926 [2024-11-25 12:19:47.796561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:46.926 [2024-11-25 12:19:47.796599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:46.926 [2024-11-25 12:19:47.796616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:46.926 [2024-11-25 12:19:47.796630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:46.926 [2024-11-25 12:19:47.796652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.926 [2024-11-25 12:19:47.796685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:46.926 [2024-11-25 12:19:47.796702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:46.926 [2024-11-25 12:19:47.796717] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.926 [2024-11-25 12:19:47.796732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:46.926 [2024-11-25 12:19:47.796816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:46.926 [2024-11-25 12:19:47.796834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.926 [2024-11-25 12:19:47.796849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:46.926 [2024-11-25 12:19:47.796864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:46.926 [2024-11-25 12:19:47.796878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.926 [2024-11-25 12:19:47.796892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:46.926 [2024-11-25 12:19:47.796907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:46.926 [2024-11-25 12:19:47.796959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.926 [2024-11-25 12:19:47.796978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:46.926 [2024-11-25 12:19:47.796992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:46.926 [2024-11-25 12:19:47.797007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.926 [2024-11-25 12:19:47.797021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:46.926 [2024-11-25 12:19:47.797036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:46.926 [2024-11-25 12:19:47.797050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:46.926 [2024-11-25 12:19:47.797089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:46.926 [2024-11-25 12:19:47.797143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:46.926 [2024-11-25 12:19:47.797173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:46.926 [2024-11-25 12:19:47.797190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:46.926 [2024-11-25 12:19:47.797205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:46.926 [2024-11-25 12:19:47.797220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.926 [2024-11-25 12:19:47.797234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:46.926 [2024-11-25 12:19:47.797248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:46.927 [2024-11-25 12:19:47.797290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.927 [2024-11-25 12:19:47.797307] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:46.927 [2024-11-25 12:19:47.797323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:46.927 [2024-11-25 12:19:47.797339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:46.927 [2024-11-25 12:19:47.797354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.927 [2024-11-25 12:19:47.797370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:46.927 [2024-11-25 12:19:47.797385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:46.927 [2024-11-25 12:19:47.797429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:46.927 
[2024-11-25 12:19:47.797448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:46.927 [2024-11-25 12:19:47.797462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:46.927 [2024-11-25 12:19:47.797477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:46.927 [2024-11-25 12:19:47.797493] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:46.927 [2024-11-25 12:19:47.797518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:46.927 [2024-11-25 12:19:47.797564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:46.927 [2024-11-25 12:19:47.797609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:46.927 [2024-11-25 12:19:47.797633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:46.927 [2024-11-25 12:19:47.797673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:46.927 [2024-11-25 12:19:47.797696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:46.927 [2024-11-25 12:19:47.797719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:46.927 [2024-11-25 12:19:47.797768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:46.927 [2024-11-25 12:19:47.797793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:46.927 [2024-11-25 12:19:47.797815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:46.927 [2024-11-25 12:19:47.797860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:46.927 [2024-11-25 12:19:47.797885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:46.927 [2024-11-25 12:19:47.797934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:46.927 [2024-11-25 12:19:47.797969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:46.927 [2024-11-25 12:19:47.798016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:46.927 [2024-11-25 12:19:47.798041] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:46.927 [2024-11-25 12:19:47.798068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:46.927 [2024-11-25 12:19:47.798121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:46.927 [2024-11-25 12:19:47.798146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:46.927 [2024-11-25 12:19:47.798168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:46.927 [2024-11-25 12:19:47.798218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:46.927 [2024-11-25 12:19:47.798268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.798285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:46.927 [2024-11-25 12:19:47.798320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:21:46.927 [2024-11-25 12:19:47.798337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.823175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.823285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.927 [2024-11-25 12:19:47.823371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.786 ms 00:21:46.927 [2024-11-25 12:19:47.823389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.823478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.823522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:46.927 [2024-11-25 12:19:47.823541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:46.927 [2024-11-25 12:19:47.823556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.864235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.864358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.927 [2024-11-25 12:19:47.864421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.592 ms 00:21:46.927 [2024-11-25 12:19:47.864440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.864484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.864504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.927 [2024-11-25 12:19:47.864521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:46.927 [2024-11-25 12:19:47.864540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.864990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.865008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.927 [2024-11-25 12:19:47.865016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:21:46.927 [2024-11-25 12:19:47.865023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.865139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.865151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.927 [2024-11-25 12:19:47.865158] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:46.927 [2024-11-25 12:19:47.865169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.877214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.877240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.927 [2024-11-25 12:19:47.877250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.028 ms 00:21:46.927 [2024-11-25 12:19:47.877257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.887938] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:46.927 [2024-11-25 12:19:47.887972] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:46.927 [2024-11-25 12:19:47.887982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.887989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:46.927 [2024-11-25 12:19:47.887997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.647 ms 00:21:46.927 [2024-11-25 12:19:47.888004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.906922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.906966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:46.927 [2024-11-25 12:19:47.906975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.886 ms 00:21:46.927 [2024-11-25 12:19:47.906996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.916003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.916029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:46.927 [2024-11-25 12:19:47.916038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.977 ms 00:21:46.927 [2024-11-25 12:19:47.916044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.924760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.924787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:46.927 [2024-11-25 12:19:47.924796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.691 ms 00:21:46.927 [2024-11-25 12:19:47.924802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.925305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.925324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:46.927 [2024-11-25 12:19:47.925332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:21:46.927 [2024-11-25 12:19:47.925340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.973575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.973641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:46.927 [2024-11-25 12:19:47.973659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.216 ms 00:21:46.927 [2024-11-25 12:19:47.973666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.982249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:46.927 [2024-11-25 12:19:47.985216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.985242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:46.927 [2024-11-25 12:19:47.985253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.494 ms 00:21:46.927 [2024-11-25 12:19:47.985261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.985373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.985382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:46.927 [2024-11-25 12:19:47.985390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:46.927 [2024-11-25 12:19:47.985399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.927 [2024-11-25 12:19:47.985481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.927 [2024-11-25 12:19:47.985490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:46.928 [2024-11-25 12:19:47.985498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:46.928 [2024-11-25 12:19:47.985505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.928 [2024-11-25 12:19:47.985522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.928 [2024-11-25 12:19:47.985529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:46.928 [2024-11-25 12:19:47.985536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:46.928 [2024-11-25 12:19:47.985543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.928 [2024-11-25 12:19:47.985574] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:46.928 [2024-11-25 12:19:47.985585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.928 [2024-11-25 12:19:47.985592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:46.928 [2024-11-25 12:19:47.985598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:46.928 [2024-11-25 12:19:47.985604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.185 [2024-11-25 12:19:48.004568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.185 [2024-11-25 12:19:48.004673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:47.185 [2024-11-25 12:19:48.004717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.948 ms 00:21:47.185 [2024-11-25 12:19:48.004741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.185 [2024-11-25 12:19:48.004815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.185 [2024-11-25 12:19:48.004835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:47.185 [2024-11-25 12:19:48.004851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:47.185 [2024-11-25 12:19:48.004887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:47.185 [2024-11-25 12:19:48.006409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.505 ms, result 0 00:21:48.119  [2024-11-25T12:19:50.571Z] Copying: 48/1024 [MB] (48 MBps) [2024-11-25T12:19:51.505Z] Copying: 101/1024 [MB] (52 MBps) [2024-11-25T12:19:52.438Z] Copying: 156/1024 [MB] (54 MBps) [2024-11-25T12:19:53.407Z] Copying: 204/1024 [MB] (47 MBps) [2024-11-25T12:19:54.341Z] Copying: 252/1024 [MB] (48 MBps) [2024-11-25T12:19:55.273Z] Copying: 296/1024 [MB] (44 MBps) [2024-11-25T12:19:56.205Z] Copying: 343/1024 [MB] (46 MBps) [2024-11-25T12:19:57.579Z] Copying: 389/1024 [MB] (46 MBps) [2024-11-25T12:19:58.515Z] Copying: 436/1024 [MB] (47 MBps) [2024-11-25T12:19:59.504Z] Copying: 483/1024 [MB] (46 MBps) [2024-11-25T12:20:00.462Z] Copying: 530/1024 [MB] (47 MBps) [2024-11-25T12:20:01.394Z] Copying: 574/1024 [MB] (43 MBps) [2024-11-25T12:20:02.327Z] Copying: 624/1024 [MB] (49 MBps) [2024-11-25T12:20:03.260Z] Copying: 674/1024 [MB] (50 MBps) [2024-11-25T12:20:04.193Z] Copying: 723/1024 [MB] (48 MBps) [2024-11-25T12:20:05.569Z] Copying: 769/1024 [MB] (45 MBps) [2024-11-25T12:20:06.503Z] Copying: 816/1024 [MB] (47 MBps) [2024-11-25T12:20:07.441Z] Copying: 865/1024 [MB] (49 MBps) [2024-11-25T12:20:08.378Z] Copying: 913/1024 [MB] (47 MBps) [2024-11-25T12:20:09.348Z] Copying: 961/1024 [MB] (48 MBps) [2024-11-25T12:20:09.915Z] Copying: 1002/1024 [MB] (40 MBps) [2024-11-25T12:20:10.174Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-25 12:20:09.958341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:09.958426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.094 [2024-11-25 12:20:09.958450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:09.094 [2024-11-25 12:20:09.958465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:09.958503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:09.094 [2024-11-25 12:20:09.964813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:09.964876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.094 [2024-11-25 12:20:09.964907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.283 ms 00:22:09.094 [2024-11-25 12:20:09.964925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:09.965350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:09.965370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.094 [2024-11-25 12:20:09.965386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:22:09.094 [2024-11-25 12:20:09.965401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:09.969173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:09.969204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:09.094 [2024-11-25 12:20:09.969215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:22:09.094 [2024-11-25 12:20:09.969224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:09.975544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:09.094 [2024-11-25 12:20:09.975725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.094 [2024-11-25 12:20:09.975743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.295 ms 00:22:09.094 [2024-11-25 12:20:09.975751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:10.000396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:10.000435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.094 [2024-11-25 12:20:10.000448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.589 ms 00:22:09.094 [2024-11-25 12:20:10.000456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:10.014860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:10.015035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.094 [2024-11-25 12:20:10.015055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.365 ms 00:22:09.094 [2024-11-25 12:20:10.015064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:10.015188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:10.015204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.094 [2024-11-25 12:20:10.015214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:09.094 [2024-11-25 12:20:10.015222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:10.038313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:10.038445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.094 [2024-11-25 12:20:10.038461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.077 ms 00:22:09.094 [2024-11-25 12:20:10.038469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:10.062116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:10.062165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:09.094 [2024-11-25 12:20:10.062178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.617 ms 00:22:09.094 [2024-11-25 12:20:10.062186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:10.084898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:10.084937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.094 [2024-11-25 12:20:10.084966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.676 ms 00:22:09.094 [2024-11-25 12:20:10.084974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 12:20:10.108208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.094 [2024-11-25 12:20:10.108373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.094 [2024-11-25 12:20:10.108390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.174 ms 00:22:09.094 [2024-11-25 12:20:10.108398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.094 [2024-11-25 
12:20:10.108430] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:09.094 [2024-11-25 12:20:10.108446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 
12:20:10.108667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:09.094 [2024-11-25 12:20:10.108691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:22:09.095 [2024-11-25 12:20:10.108854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.108997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:09.095 [2024-11-25 12:20:10.109295] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:09.095 [2024-11-25 12:20:10.109307] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f81f1b95-15f5-4eae-9c10-92a99a1dcc63 00:22:09.095 [2024-11-25 12:20:10.109315] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:09.095 [2024-11-25 12:20:10.109322] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:09.095 [2024-11-25 12:20:10.109329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:09.095 [2024-11-25 12:20:10.109338] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:09.095 [2024-11-25 12:20:10.109345] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:09.095 [2024-11-25 12:20:10.109353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:09.095 [2024-11-25 12:20:10.109366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:09.095 [2024-11-25 12:20:10.109372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:09.095 [2024-11-25 12:20:10.109378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:09.095 [2024-11-25 12:20:10.109386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.095 [2024-11-25 12:20:10.109393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:09.095 [2024-11-25 12:20:10.109402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:22:09.096 [2024-11-25 12:20:10.109410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.096 [2024-11-25 12:20:10.121862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.096 [2024-11-25 12:20:10.121897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:09.096 [2024-11-25 12:20:10.121908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.432 ms 00:22:09.096 [2024-11-25 12:20:10.121916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.096 [2024-11-25 12:20:10.122283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.096 [2024-11-25 12:20:10.122305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:09.096 [2024-11-25 12:20:10.122314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:22:09.096 [2024-11-25 12:20:10.122326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.096 [2024-11-25 12:20:10.155822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.096 [2024-11-25 12:20:10.156004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.096 [2024-11-25 12:20:10.156023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.096 [2024-11-25 12:20:10.156031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.096 [2024-11-25 12:20:10.156088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.096 [2024-11-25 12:20:10.156097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.096 [2024-11-25 12:20:10.156105] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.096 [2024-11-25 12:20:10.156117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.096 [2024-11-25 12:20:10.156174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.096 [2024-11-25 12:20:10.156184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.096 [2024-11-25 12:20:10.156192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.096 [2024-11-25 12:20:10.156200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.096 [2024-11-25 12:20:10.156214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.096 [2024-11-25 12:20:10.156222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.096 [2024-11-25 12:20:10.156229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.096 [2024-11-25 12:20:10.156237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.354 [2024-11-25 12:20:10.235689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.354 [2024-11-25 12:20:10.235879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.354 [2024-11-25 12:20:10.235898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.354 [2024-11-25 12:20:10.235907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.354 [2024-11-25 12:20:10.300174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.354 [2024-11-25 12:20:10.300226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.354 [2024-11-25 12:20:10.300238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.354 [2024-11-25 12:20:10.300247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.354 [2024-11-25 12:20:10.300323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.354 [2024-11-25 12:20:10.300334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:09.354 [2024-11-25 12:20:10.300343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.354 [2024-11-25 12:20:10.300351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.354 [2024-11-25 12:20:10.300383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.354 [2024-11-25 12:20:10.300392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:09.354 [2024-11-25 12:20:10.300400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.354 [2024-11-25 12:20:10.300407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.354 [2024-11-25 12:20:10.300494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.354 [2024-11-25 12:20:10.300504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:09.354 [2024-11-25 12:20:10.300512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.354 [2024-11-25 12:20:10.300519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.354 [2024-11-25 12:20:10.300546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.354 [2024-11-25 12:20:10.300555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:22:09.354 [2024-11-25 12:20:10.300562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.354 [2024-11-25 12:20:10.300569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.354 [2024-11-25 12:20:10.300602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.354 [2024-11-25 12:20:10.300613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:09.354 [2024-11-25 12:20:10.300621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.354 [2024-11-25 12:20:10.300628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.354 [2024-11-25 12:20:10.300666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.354 [2024-11-25 12:20:10.300675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:09.354 [2024-11-25 12:20:10.300682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.354 [2024-11-25 12:20:10.300690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.355 [2024-11-25 12:20:10.300797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.445 ms, result 0 00:22:09.922 00:22:09.922 00:22:09.922 12:20:10 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:12.461 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:12.461 12:20:13 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:12.461 [2024-11-25 12:20:13.202907] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:22:12.461 [2024-11-25 12:20:13.203066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77874 ] 00:22:12.461 [2024-11-25 12:20:13.364837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.461 [2024-11-25 12:20:13.462173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.719 [2024-11-25 12:20:13.710933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:12.719 [2024-11-25 12:20:13.711013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:12.979 [2024-11-25 12:20:13.864498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.864548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:12.979 [2024-11-25 12:20:13.864565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:12.979 [2024-11-25 12:20:13.864573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.864617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.864627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.979 [2024-11-25 12:20:13.864637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:12.979 [2024-11-25 12:20:13.864645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.864663] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:12.979 [2024-11-25 12:20:13.865342] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:12.979 [2024-11-25 12:20:13.865360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.865369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.979 [2024-11-25 12:20:13.865377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:22:12.979 [2024-11-25 12:20:13.865384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.866832] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:12.979 [2024-11-25 12:20:13.879159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.879195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:12.979 [2024-11-25 12:20:13.879209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.328 ms 00:22:12.979 [2024-11-25 12:20:13.879217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.879274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.879283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:12.979 [2024-11-25 12:20:13.879291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:12.979 [2024-11-25 12:20:13.879298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.884074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:12.979 [2024-11-25 12:20:13.884106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.979 [2024-11-25 12:20:13.884116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:22:12.979 [2024-11-25 12:20:13.884123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.884200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.884210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.979 [2024-11-25 12:20:13.884219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:12.979 [2024-11-25 12:20:13.884226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.884266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.884277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:12.979 [2024-11-25 12:20:13.884285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:12.979 [2024-11-25 12:20:13.884293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.884316] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:12.979 [2024-11-25 12:20:13.887697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.887723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.979 [2024-11-25 12:20:13.887733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.387 ms 00:22:12.979 [2024-11-25 12:20:13.887742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.887771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.887779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:12.979 [2024-11-25 12:20:13.887787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:12.979 [2024-11-25 12:20:13.887794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.887814] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:12.979 [2024-11-25 12:20:13.887832] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:12.979 [2024-11-25 12:20:13.887866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:12.979 [2024-11-25 12:20:13.887882] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:12.979 [2024-11-25 12:20:13.888000] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:12.979 [2024-11-25 12:20:13.888013] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:12.979 [2024-11-25 12:20:13.888024] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:12.979 [2024-11-25 12:20:13.888035] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:12.979 [2024-11-25 12:20:13.888044] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:12.979 [2024-11-25 12:20:13.888052] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:12.979 [2024-11-25 12:20:13.888059] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:12.979 [2024-11-25 12:20:13.888068] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:12.979 [2024-11-25 12:20:13.888075] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:12.979 [2024-11-25 12:20:13.888085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.888092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:12.979 [2024-11-25 12:20:13.888100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:22:12.979 [2024-11-25 12:20:13.888108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.888191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.979 [2024-11-25 12:20:13.888200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:12.979 [2024-11-25 12:20:13.888208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:12.979 [2024-11-25 12:20:13.888214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.979 [2024-11-25 12:20:13.888329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:12.979 [2024-11-25 12:20:13.888343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:12.979 [2024-11-25 12:20:13.888352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:12.979 [2024-11-25 12:20:13.888360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.979 [2024-11-25 12:20:13.888368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:12.979 [2024-11-25 12:20:13.888374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:12.979 [2024-11-25 12:20:13.888381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:12.979 [2024-11-25 12:20:13.888388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:12.979 [2024-11-25 12:20:13.888395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:12.979 [2024-11-25 12:20:13.888401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:12.979 [2024-11-25 12:20:13.888408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:12.979 [2024-11-25 12:20:13.888415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:12.979 [2024-11-25 12:20:13.888421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:12.979 [2024-11-25 12:20:13.888429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:12.979 [2024-11-25 12:20:13.888436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:12.979 [2024-11-25 12:20:13.888448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.979 [2024-11-25 12:20:13.888457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:12.979 [2024-11-25 12:20:13.888463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:12.980 [2024-11-25 12:20:13.888470] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:12.980 [2024-11-25 12:20:13.888484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.980 [2024-11-25 12:20:13.888498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:12.980 [2024-11-25 12:20:13.888504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.980 [2024-11-25 12:20:13.888517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:12.980 [2024-11-25 12:20:13.888523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.980 [2024-11-25 12:20:13.888536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:12.980 [2024-11-25 12:20:13.888543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.980 [2024-11-25 12:20:13.888556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:12.980 [2024-11-25 12:20:13.888563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:12.980 [2024-11-25 12:20:13.888576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:12.980 [2024-11-25 12:20:13.888583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:12.980 [2024-11-25 12:20:13.888590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:12.980 [2024-11-25 12:20:13.888596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:12.980 [2024-11-25 12:20:13.888603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:12.980 [2024-11-25 12:20:13.888609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:12.980 [2024-11-25 12:20:13.888622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:12.980 [2024-11-25 12:20:13.888629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888636] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:12.980 [2024-11-25 12:20:13.888643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:12.980 [2024-11-25 12:20:13.888650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:12.980 [2024-11-25 12:20:13.888657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.980 [2024-11-25 12:20:13.888664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:12.980 [2024-11-25 12:20:13.888671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:12.980 [2024-11-25 12:20:13.888678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:12.980 
[2024-11-25 12:20:13.888685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:12.980 [2024-11-25 12:20:13.888691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:12.980 [2024-11-25 12:20:13.888697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:12.980 [2024-11-25 12:20:13.888705] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:12.980 [2024-11-25 12:20:13.888714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:12.980 [2024-11-25 12:20:13.888723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:12.980 [2024-11-25 12:20:13.888730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:12.980 [2024-11-25 12:20:13.888737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:12.980 [2024-11-25 12:20:13.888743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:12.980 [2024-11-25 12:20:13.888750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:12.980 [2024-11-25 12:20:13.888757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:12.980 [2024-11-25 12:20:13.888764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:12.980 [2024-11-25 12:20:13.888771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:12.980 [2024-11-25 12:20:13.888778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:12.980 [2024-11-25 12:20:13.888785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:12.980 [2024-11-25 12:20:13.888792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:12.980 [2024-11-25 12:20:13.888798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:12.980 [2024-11-25 12:20:13.888805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:12.980 [2024-11-25 12:20:13.888812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:12.980 [2024-11-25 12:20:13.888819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:12.980 [2024-11-25 12:20:13.888830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:12.980 [2024-11-25 12:20:13.888838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:12.980 [2024-11-25 12:20:13.888845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:12.980 [2024-11-25 12:20:13.888852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:12.980 [2024-11-25 12:20:13.888859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:12.980 [2024-11-25 12:20:13.888866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.888874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:12.980 [2024-11-25 12:20:13.888882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:22:12.980 [2024-11-25 12:20:13.888889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:13.914097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.914250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.980 [2024-11-25 12:20:13.914266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.169 ms 00:22:12.980 [2024-11-25 12:20:13.914274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:13.914361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.914369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:12.980 [2024-11-25 12:20:13.914377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:12.980 [2024-11-25 12:20:13.914384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:13.952004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.952179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.980 [2024-11-25 12:20:13.952198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.572 ms 00:22:12.980 [2024-11-25 12:20:13.952207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:13.952252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.952262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.980 [2024-11-25 12:20:13.952271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:12.980 [2024-11-25 12:20:13.952282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:13.952625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.952642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.980 [2024-11-25 12:20:13.952651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:22:12.980 [2024-11-25 12:20:13.952659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:13.952779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.952788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:12.980 [2024-11-25 12:20:13.952797] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:12.980 [2024-11-25 12:20:13.952804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:13.965664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.965694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.980 [2024-11-25 12:20:13.965704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.836 ms 00:22:12.980 [2024-11-25 12:20:13.965714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:13.977771] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:12.980 [2024-11-25 12:20:13.977804] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:12.980 [2024-11-25 12:20:13.977815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:13.977823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:12.980 [2024-11-25 12:20:13.977832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.992 ms 00:22:12.980 [2024-11-25 12:20:13.977839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:14.001944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:14.001987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:12.980 [2024-11-25 12:20:14.001998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.069 ms 00:22:12.980 [2024-11-25 12:20:14.002006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.980 [2024-11-25 12:20:14.013497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.980 [2024-11-25 12:20:14.013527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:12.981 [2024-11-25 12:20:14.013537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.449 ms 00:22:12.981 [2024-11-25 12:20:14.013544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.981 [2024-11-25 12:20:14.024517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.981 [2024-11-25 12:20:14.024546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:12.981 [2024-11-25 12:20:14.024556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.941 ms 00:22:12.981 [2024-11-25 12:20:14.024563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.981 [2024-11-25 12:20:14.025176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.981 [2024-11-25 12:20:14.025200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:12.981 [2024-11-25 12:20:14.025209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:22:12.981 [2024-11-25 12:20:14.025218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.239 [2024-11-25 12:20:14.080599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.239 [2024-11-25 12:20:14.080646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:13.239 [2024-11-25 12:20:14.080665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.364 ms 00:22:13.239 [2024-11-25 12:20:14.080673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.239 [2024-11-25 12:20:14.090896] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:13.239 [2024-11-25 12:20:14.093496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.239 [2024-11-25 12:20:14.093524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:13.239 [2024-11-25 12:20:14.093537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.775 ms 00:22:13.239 [2024-11-25 12:20:14.093546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.239 [2024-11-25 12:20:14.093640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.239 [2024-11-25 12:20:14.093651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:13.239 [2024-11-25 12:20:14.093660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.239 [2024-11-25 12:20:14.093670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.239 [2024-11-25 12:20:14.093732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.239 [2024-11-25 12:20:14.093743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:13.239 [2024-11-25 12:20:14.093751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:13.239 [2024-11-25 12:20:14.093758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.239 [2024-11-25 12:20:14.093776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.239 [2024-11-25 12:20:14.093784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:13.239 [2024-11-25 12:20:14.093793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:13.239 [2024-11-25 12:20:14.093800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.239 [2024-11-25 12:20:14.093829] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:13.239 [2024-11-25 12:20:14.093840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.239 [2024-11-25 12:20:14.093848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:13.239 [2024-11-25 12:20:14.093856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:13.239 [2024-11-25 12:20:14.093863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.239 [2024-11-25 12:20:14.116536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.239 [2024-11-25 12:20:14.116569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:13.239 [2024-11-25 12:20:14.116582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.656 ms 00:22:13.239 [2024-11-25 12:20:14.116594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.239 [2024-11-25 12:20:14.116665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.239 [2024-11-25 12:20:14.116675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:13.239 [2024-11-25 12:20:14.116683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:13.239 [2024-11-25 12:20:14.116691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
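The layout numbers in the startup trace above are internally consistent and can be cross-checked by hand. A minimal sketch in Python, assuming SPDK's 4 KiB FTL block size (the log never prints the block size, so that constant is an assumption; every other value is read directly off the dump):

    FTL_BLOCK_SIZE = 4096   # assumed 4 KiB FTL block; not printed in this log
    MiB = 1024 * 1024

    # "L2P entries: 20971520" at "L2P address size: 4" bytes apiece is the
    # same 80.00 MiB that the NV cache layout reserves for the l2p region:
    assert 20971520 * 4 == 80 * MiB

    # The superblock metadata dump counts in blocks: the l2p region
    # (type:0x2) has blk_sz:0x5000 = 20480 blocks, again 80 MiB:
    assert 0x5000 * FTL_BLOCK_SIZE == 80 * MiB

    # Regions are packed back to back, so the next region (type:0x3)
    # starts at blk_offs 0x20 + 0x5000 = 0x5020, exactly as dumped:
    assert 0x20 + 0x5000 == 0x5020

    # "P2L checkpoint pages: 2048" matches the four 8.00 MiB p2l regions:
    # blk_sz:0x800 = 2048 blocks of 4 KiB each.
    assert 0x800 * FTL_BLOCK_SIZE == 8 * MiB

Under the same assumed block size, the band geometry reported later also converts cleanly: 261120 blocks per band is 1020 MiB per band.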
00:22:13.239 [2024-11-25 12:20:14.118090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 253.159 ms, result 0 00:22:14.171  [2024-11-25T12:20:16.184Z] Copying: 43/1024 [MB] (43 MBps) [2024-11-25T12:20:17.557Z] Copying: 86/1024 [MB] (42 MBps) [2024-11-25T12:20:18.492Z] Copying: 130/1024 [MB] (44 MBps) [2024-11-25T12:20:19.426Z] Copying: 174/1024 [MB] (43 MBps) [2024-11-25T12:20:20.359Z] Copying: 219/1024 [MB] (44 MBps) [2024-11-25T12:20:21.293Z] Copying: 263/1024 [MB] (44 MBps) [2024-11-25T12:20:22.224Z] Copying: 308/1024 [MB] (44 MBps) [2024-11-25T12:20:23.242Z] Copying: 352/1024 [MB] (44 MBps) [2024-11-25T12:20:24.237Z] Copying: 397/1024 [MB] (44 MBps) [2024-11-25T12:20:25.270Z] Copying: 439/1024 [MB] (42 MBps) [2024-11-25T12:20:26.228Z] Copying: 482/1024 [MB] (42 MBps) [2024-11-25T12:20:27.160Z] Copying: 526/1024 [MB] (43 MBps) [2024-11-25T12:20:28.537Z] Copying: 574/1024 [MB] (47 MBps) [2024-11-25T12:20:29.474Z] Copying: 611/1024 [MB] (37 MBps) [2024-11-25T12:20:30.405Z] Copying: 632/1024 [MB] (20 MBps) [2024-11-25T12:20:31.347Z] Copying: 672/1024 [MB] (39 MBps) [2024-11-25T12:20:32.289Z] Copying: 708/1024 [MB] (36 MBps) [2024-11-25T12:20:33.295Z] Copying: 732/1024 [MB] (23 MBps) [2024-11-25T12:20:34.238Z] Copying: 754/1024 [MB] (21 MBps) [2024-11-25T12:20:35.177Z] Copying: 769/1024 [MB] (15 MBps) [2024-11-25T12:20:36.549Z] Copying: 792/1024 [MB] (22 MBps) [2024-11-25T12:20:37.486Z] Copying: 819/1024 [MB] (27 MBps) [2024-11-25T12:20:38.426Z] Copying: 853/1024 [MB] (33 MBps) [2024-11-25T12:20:39.373Z] Copying: 891/1024 [MB] (38 MBps) [2024-11-25T12:20:40.312Z] Copying: 914/1024 [MB] (23 MBps) [2024-11-25T12:20:41.249Z] Copying: 936/1024 [MB] (21 MBps) [2024-11-25T12:20:42.201Z] Copying: 952/1024 [MB] (16 MBps) [2024-11-25T12:20:43.144Z] Copying: 967/1024 [MB] (15 MBps) [2024-11-25T12:20:44.529Z] Copying: 1000144/1048576 [kB] (9308 kBps) [2024-11-25T12:20:45.462Z] Copying: 1009992/1048576 [kB] (9848 kBps) [2024-11-25T12:20:46.028Z] Copying: 1023/1024 [MB] (36 MBps) [2024-11-25T12:20:46.028Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-25 12:20:45.991102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.948 [2024-11-25 12:20:45.991162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:44.948 [2024-11-25 12:20:45.991177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:44.948 [2024-11-25 12:20:45.991194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.948 [2024-11-25 12:20:45.995640] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:44.948 [2024-11-25 12:20:45.999448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.948 [2024-11-25 12:20:45.999481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:44.948 [2024-11-25 12:20:45.999494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.660 ms 00:22:44.948 [2024-11-25 12:20:45.999502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.948 [2024-11-25 12:20:46.010285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.948 [2024-11-25 12:20:46.010408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:44.948 [2024-11-25 12:20:46.010424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.841 ms 00:22:44.948 [2024-11-25 
12:20:46.010433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.027304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.027338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:45.228 [2024-11-25 12:20:46.027348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.848 ms 00:22:45.228 [2024-11-25 12:20:46.027357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.033615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.033733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:45.228 [2024-11-25 12:20:46.033748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.231 ms 00:22:45.228 [2024-11-25 12:20:46.033756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.056919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.057054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:45.228 [2024-11-25 12:20:46.057115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.110 ms 00:22:45.228 [2024-11-25 12:20:46.057138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.070842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.070985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:45.228 [2024-11-25 12:20:46.071044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.663 ms 00:22:45.228 [2024-11-25 12:20:46.071067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.120866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.121004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:45.228 [2024-11-25 12:20:46.121057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.756 ms 00:22:45.228 [2024-11-25 12:20:46.121079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.144371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.144499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:45.228 [2024-11-25 12:20:46.144551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.263 ms 00:22:45.228 [2024-11-25 12:20:46.144575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.167883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.168032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:45.228 [2024-11-25 12:20:46.168085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.238 ms 00:22:45.228 [2024-11-25 12:20:46.168107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.189921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.190055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:45.228 [2024-11-25 12:20:46.190122] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 21.772 ms 00:22:45.228 [2024-11-25 12:20:46.190145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.212741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.228 [2024-11-25 12:20:46.212855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:45.228 [2024-11-25 12:20:46.212903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.535 ms 00:22:45.228 [2024-11-25 12:20:46.212924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.228 [2024-11-25 12:20:46.212976] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:45.228 [2024-11-25 12:20:46.213005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119296 / 261120 wr_cnt: 1 state: open 00:22:45.228 [2024-11-25 12:20:46.213036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.213941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 
wr_cnt: 0 state: free 00:22:45.228 [2024-11-25 12:20:46.214089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.214996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215926] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.215998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216127] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:45.229 [2024-11-25 12:20:46.216178] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:45.229 [2024-11-25 12:20:46.216186] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f81f1b95-15f5-4eae-9c10-92a99a1dcc63 00:22:45.229 [2024-11-25 12:20:46.216195] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119296 00:22:45.229 [2024-11-25 12:20:46.216202] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120256 00:22:45.229 [2024-11-25 12:20:46.216209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119296 00:22:45.229 [2024-11-25 12:20:46.216217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:22:45.229 [2024-11-25 12:20:46.216224] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:45.229 [2024-11-25 12:20:46.216236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:45.229 [2024-11-25 12:20:46.216249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:45.229 [2024-11-25 12:20:46.216256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:45.229 [2024-11-25 12:20:46.216262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:45.229 [2024-11-25 12:20:46.216270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.229 [2024-11-25 12:20:46.216277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:45.229 [2024-11-25 12:20:46.216286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:22:45.230 [2024-11-25 12:20:46.216293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.230 [2024-11-25 12:20:46.228740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.230 [2024-11-25 12:20:46.228773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:45.230 [2024-11-25 12:20:46.228784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.428 ms 00:22:45.230 [2024-11-25 12:20:46.228796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.230 [2024-11-25 12:20:46.229169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.230 [2024-11-25 12:20:46.229180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:45.230 [2024-11-25 12:20:46.229188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:22:45.230 [2024-11-25 12:20:46.229195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.230 [2024-11-25 12:20:46.261572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:45.230 [2024-11-25 12:20:46.261710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.230 [2024-11-25 12:20:46.261731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.230 [2024-11-25 12:20:46.261739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.230 [2024-11-25 12:20:46.261800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.230 [2024-11-25 12:20:46.261809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.230 [2024-11-25 12:20:46.261817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.230 [2024-11-25 12:20:46.261824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.230 [2024-11-25 12:20:46.261880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.230 [2024-11-25 12:20:46.261890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.230 [2024-11-25 12:20:46.261897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.230 [2024-11-25 12:20:46.261908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.230 [2024-11-25 12:20:46.261923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.230 [2024-11-25 12:20:46.261930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.230 [2024-11-25 12:20:46.261937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.230 [2024-11-25 12:20:46.261944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 12:20:46.338931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.488 [2024-11-25 12:20:46.338982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.488 [2024-11-25 12:20:46.338998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.488 [2024-11-25 12:20:46.339005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 12:20:46.401864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.488 [2024-11-25 12:20:46.401906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.488 [2024-11-25 12:20:46.401916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.488 [2024-11-25 12:20:46.401924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 12:20:46.402009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.488 [2024-11-25 12:20:46.402019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.488 [2024-11-25 12:20:46.402027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.488 [2024-11-25 12:20:46.402034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 12:20:46.402071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.488 [2024-11-25 12:20:46.402080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.488 [2024-11-25 12:20:46.402088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.488 [2024-11-25 12:20:46.402095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 
12:20:46.402176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.488 [2024-11-25 12:20:46.402185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.488 [2024-11-25 12:20:46.402193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.488 [2024-11-25 12:20:46.402201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 12:20:46.402235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.488 [2024-11-25 12:20:46.402244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:45.488 [2024-11-25 12:20:46.402251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.488 [2024-11-25 12:20:46.402259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 12:20:46.402292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.488 [2024-11-25 12:20:46.402300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.488 [2024-11-25 12:20:46.402308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.488 [2024-11-25 12:20:46.402315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 12:20:46.402355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.488 [2024-11-25 12:20:46.402365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.488 [2024-11-25 12:20:46.402373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.488 [2024-11-25 12:20:46.402380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.488 [2024-11-25 12:20:46.402487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.950 ms, result 0 00:22:50.749 00:22:50.749 00:22:50.749 12:20:50 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:50.749 [2024-11-25 12:20:50.911645] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
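The 'FTL shutdown' statistics and the spdk_dd invocation above can be sanity-checked the same way. In this sketch the 4 KiB block size, and the dd(1)-style convention that --skip and --count are counted in blocks, are both assumptions; the remaining numbers come straight from the log:

    BLOCK = 4096
    MiB = 1024 * 1024

    # If --count/--skip are block counts, --count=262144 is exactly the
    # 1024 MB copied per pass, starting 512 MiB in (--skip=131072):
    assert 262144 * BLOCK == 1024 * MiB
    assert 131072 * BLOCK == 512 * MiB

    # Write amplification from the stats dump: total writes / user writes.
    total_writes, user_writes = 120256, 119296
    print(f"WAF = {total_writes / user_writes:.4f}")   # prints: WAF = 1.0080

The throughput summary is consistent as well: the copy ran from roughly 12:20:14.1 (startup finished) to 12:20:45.99 (shutdown began), about 31.9 s for 1024 MB, matching the reported average of 32 MBps.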
00:22:50.749 [2024-11-25 12:20:50.911764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78226 ] 00:22:50.749 [2024-11-25 12:20:51.071645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.749 [2024-11-25 12:20:51.168836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.749 [2024-11-25 12:20:51.420570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.749 [2024-11-25 12:20:51.420631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.749 [2024-11-25 12:20:51.576461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.576513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:50.749 [2024-11-25 12:20:51.576530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:50.749 [2024-11-25 12:20:51.576538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.576581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.576590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.749 [2024-11-25 12:20:51.576600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:50.749 [2024-11-25 12:20:51.576607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.576626] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:50.749 [2024-11-25 12:20:51.577337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:50.749 [2024-11-25 12:20:51.577353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.577361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.749 [2024-11-25 12:20:51.577369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:22:50.749 [2024-11-25 12:20:51.577377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.578425] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:50.749 [2024-11-25 12:20:51.590492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.590524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:50.749 [2024-11-25 12:20:51.590536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.068 ms 00:22:50.749 [2024-11-25 12:20:51.590544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.590595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.590605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:50.749 [2024-11-25 12:20:51.590613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:50.749 [2024-11-25 12:20:51.590620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.595456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:50.749 [2024-11-25 12:20:51.595594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.749 [2024-11-25 12:20:51.595609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.779 ms 00:22:50.749 [2024-11-25 12:20:51.595616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.595686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.595694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.749 [2024-11-25 12:20:51.595703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:50.749 [2024-11-25 12:20:51.595710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.595752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.595761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:50.749 [2024-11-25 12:20:51.595769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:50.749 [2024-11-25 12:20:51.595777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.595797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:50.749 [2024-11-25 12:20:51.599081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.599108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.749 [2024-11-25 12:20:51.599117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:22:50.749 [2024-11-25 12:20:51.599127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.749 [2024-11-25 12:20:51.599153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.749 [2024-11-25 12:20:51.599161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:50.750 [2024-11-25 12:20:51.599169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:50.750 [2024-11-25 12:20:51.599176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.750 [2024-11-25 12:20:51.599195] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:50.750 [2024-11-25 12:20:51.599212] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:50.750 [2024-11-25 12:20:51.599246] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:50.750 [2024-11-25 12:20:51.599263] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:50.750 [2024-11-25 12:20:51.599363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:50.750 [2024-11-25 12:20:51.599374] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:50.750 [2024-11-25 12:20:51.599385] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:50.750 [2024-11-25 12:20:51.599394] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599402] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599410] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:50.750 [2024-11-25 12:20:51.599418] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:50.750 [2024-11-25 12:20:51.599425] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:50.750 [2024-11-25 12:20:51.599432] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:50.750 [2024-11-25 12:20:51.599441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.750 [2024-11-25 12:20:51.599448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:50.750 [2024-11-25 12:20:51.599455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:22:50.750 [2024-11-25 12:20:51.599461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.750 [2024-11-25 12:20:51.599543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.750 [2024-11-25 12:20:51.599551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:50.750 [2024-11-25 12:20:51.599558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:50.750 [2024-11-25 12:20:51.599565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.750 [2024-11-25 12:20:51.599665] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:50.750 [2024-11-25 12:20:51.599677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:50.750 [2024-11-25 12:20:51.599685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:50.750 [2024-11-25 12:20:51.599707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:50.750 [2024-11-25 12:20:51.599728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.750 [2024-11-25 12:20:51.599741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:50.750 [2024-11-25 12:20:51.599748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:50.750 [2024-11-25 12:20:51.599754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.750 [2024-11-25 12:20:51.599760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:50.750 [2024-11-25 12:20:51.599767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:50.750 [2024-11-25 12:20:51.599778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:50.750 [2024-11-25 12:20:51.599795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599801] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:50.750 [2024-11-25 12:20:51.599814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:50.750 [2024-11-25 12:20:51.599833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:50.750 [2024-11-25 12:20:51.599852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:50.750 [2024-11-25 12:20:51.599871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.750 [2024-11-25 12:20:51.599884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:50.750 [2024-11-25 12:20:51.599891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.750 [2024-11-25 12:20:51.599903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:50.750 [2024-11-25 12:20:51.599910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:50.750 [2024-11-25 12:20:51.599916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.750 [2024-11-25 12:20:51.599922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:50.750 [2024-11-25 12:20:51.599929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:50.750 [2024-11-25 12:20:51.599935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:50.750 [2024-11-25 12:20:51.599973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:50.750 [2024-11-25 12:20:51.599980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.750 [2024-11-25 12:20:51.599987] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:50.750 [2024-11-25 12:20:51.599995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:50.750 [2024-11-25 12:20:51.600002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.750 [2024-11-25 12:20:51.600009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.750 [2024-11-25 12:20:51.600016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:50.750 [2024-11-25 12:20:51.600024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:50.750 [2024-11-25 12:20:51.600030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:50.750 
[2024-11-25 12:20:51.600037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:50.750 [2024-11-25 12:20:51.600044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:50.750 [2024-11-25 12:20:51.600050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:50.750 [2024-11-25 12:20:51.600059] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:50.750 [2024-11-25 12:20:51.600068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.750 [2024-11-25 12:20:51.600084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:50.750 [2024-11-25 12:20:51.600092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:50.750 [2024-11-25 12:20:51.600098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:50.750 [2024-11-25 12:20:51.600105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:50.750 [2024-11-25 12:20:51.600112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:50.750 [2024-11-25 12:20:51.600119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:50.750 [2024-11-25 12:20:51.600126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:50.750 [2024-11-25 12:20:51.600133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:50.750 [2024-11-25 12:20:51.600140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:50.750 [2024-11-25 12:20:51.600146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:50.750 [2024-11-25 12:20:51.600154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:50.750 [2024-11-25 12:20:51.600160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:50.750 [2024-11-25 12:20:51.600167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:50.750 [2024-11-25 12:20:51.600174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:50.750 [2024-11-25 12:20:51.600181] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:50.750 [2024-11-25 12:20:51.600191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.750 [2024-11-25 12:20:51.600199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:50.750 [2024-11-25 12:20:51.600206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:50.750 [2024-11-25 12:20:51.600213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:50.750 [2024-11-25 12:20:51.600220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:50.751 [2024-11-25 12:20:51.600227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.600235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:50.751 [2024-11-25 12:20:51.600242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:22:50.751 [2024-11-25 12:20:51.600249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.625924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.625975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.751 [2024-11-25 12:20:51.625986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.622 ms 00:22:50.751 [2024-11-25 12:20:51.626005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.626094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.626102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:50.751 [2024-11-25 12:20:51.626110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:50.751 [2024-11-25 12:20:51.626117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.675164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.675333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.751 [2024-11-25 12:20:51.675352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.995 ms 00:22:50.751 [2024-11-25 12:20:51.675360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.675409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.675420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.751 [2024-11-25 12:20:51.675429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:50.751 [2024-11-25 12:20:51.675440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.675790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.675806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.751 [2024-11-25 12:20:51.675815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:22:50.751 [2024-11-25 12:20:51.675823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.675979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.675990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.751 [2024-11-25 12:20:51.675998] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:22:50.751 [2024-11-25 12:20:51.676010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.688728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.688761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.751 [2024-11-25 12:20:51.688773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.699 ms 00:22:50.751 [2024-11-25 12:20:51.688780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.700998] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:50.751 [2024-11-25 12:20:51.701032] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:50.751 [2024-11-25 12:20:51.701044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.701053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:50.751 [2024-11-25 12:20:51.701062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.169 ms 00:22:50.751 [2024-11-25 12:20:51.701069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.725384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.725421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:50.751 [2024-11-25 12:20:51.725433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.278 ms 00:22:50.751 [2024-11-25 12:20:51.725441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.736476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.736513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:50.751 [2024-11-25 12:20:51.736523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.993 ms 00:22:50.751 [2024-11-25 12:20:51.736530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.747785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.747811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:50.751 [2024-11-25 12:20:51.747821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.224 ms 00:22:50.751 [2024-11-25 12:20:51.747828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.748431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.748455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:50.751 [2024-11-25 12:20:51.748464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:22:50.751 [2024-11-25 12:20:51.748474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.803511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.803562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:50.751 [2024-11-25 12:20:51.803580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.019 ms 00:22:50.751 [2024-11-25 12:20:51.803588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.813742] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:50.751 [2024-11-25 12:20:51.816056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.816086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:50.751 [2024-11-25 12:20:51.816097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.425 ms 00:22:50.751 [2024-11-25 12:20:51.816106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.816189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.816200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:50.751 [2024-11-25 12:20:51.816209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:50.751 [2024-11-25 12:20:51.816219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.817563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.817688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:50.751 [2024-11-25 12:20:51.817705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:22:50.751 [2024-11-25 12:20:51.817712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.817737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.817745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:50.751 [2024-11-25 12:20:51.817753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:50.751 [2024-11-25 12:20:51.817761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.751 [2024-11-25 12:20:51.817794] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:50.751 [2024-11-25 12:20:51.817807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.751 [2024-11-25 12:20:51.817814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:50.751 [2024-11-25 12:20:51.817821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:50.751 [2024-11-25 12:20:51.817829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.009 [2024-11-25 12:20:51.840399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.009 [2024-11-25 12:20:51.840554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:51.010 [2024-11-25 12:20:51.840571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.554 ms 00:22:51.010 [2024-11-25 12:20:51.840583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.010 [2024-11-25 12:20:51.840647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.010 [2024-11-25 12:20:51.840657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:51.010 [2024-11-25 12:20:51.840664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:51.010 [2024-11-25 12:20:51.840671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
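
The layout numbers dumped above are internally consistent: 20971520 L2P entries at the reported address size of 4 bytes come to exactly the 80.00 MiB shown for the l2p region. A quick shell-arithmetic check (an illustration, not part of the test):

# 20971520 entries * 4 bytes per entry, expressed in MiB
echo $(( 20971520 * 4 / 1024 / 1024 ))    # prints 80

At a 4096-byte FTL block size (the block size used elsewhere in these tests), the same entry count maps an 80 GiB user address space onto the 103424.00 MiB base device, which suggests roughly a fifth of the raw capacity is held back for bands, metadata, and over-provisioning.
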
00:22:51.010 [2024-11-25 12:20:51.841548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.675 ms, result 0 00:22:52.391  [2024-11-25T12:20:54.042Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-25T12:20:55.424Z] Copying: 61/1024 [MB] (31 MBps) [2024-11-25T12:20:56.359Z] Copying: 83/1024 [MB] (22 MBps) [2024-11-25T12:20:57.296Z] Copying: 126/1024 [MB] (43 MBps) [2024-11-25T12:20:58.229Z] Copying: 178/1024 [MB] (51 MBps) [2024-11-25T12:20:59.161Z] Copying: 226/1024 [MB] (48 MBps) [2024-11-25T12:21:00.096Z] Copying: 272/1024 [MB] (45 MBps) [2024-11-25T12:21:01.030Z] Copying: 320/1024 [MB] (47 MBps) [2024-11-25T12:21:02.404Z] Copying: 368/1024 [MB] (48 MBps) [2024-11-25T12:21:03.335Z] Copying: 416/1024 [MB] (48 MBps) [2024-11-25T12:21:04.269Z] Copying: 466/1024 [MB] (49 MBps) [2024-11-25T12:21:05.203Z] Copying: 514/1024 [MB] (48 MBps) [2024-11-25T12:21:06.136Z] Copying: 561/1024 [MB] (47 MBps) [2024-11-25T12:21:07.069Z] Copying: 609/1024 [MB] (47 MBps) [2024-11-25T12:21:08.032Z] Copying: 660/1024 [MB] (50 MBps) [2024-11-25T12:21:09.406Z] Copying: 709/1024 [MB] (49 MBps) [2024-11-25T12:21:10.338Z] Copying: 757/1024 [MB] (48 MBps) [2024-11-25T12:21:11.270Z] Copying: 807/1024 [MB] (49 MBps) [2024-11-25T12:21:12.203Z] Copying: 854/1024 [MB] (46 MBps) [2024-11-25T12:21:13.137Z] Copying: 904/1024 [MB] (50 MBps) [2024-11-25T12:21:14.092Z] Copying: 952/1024 [MB] (47 MBps) [2024-11-25T12:21:14.705Z] Copying: 1000/1024 [MB] (48 MBps) [2024-11-25T12:21:16.082Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-11-25 12:21:15.727765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.002 [2024-11-25 12:21:15.727831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:15.002 [2024-11-25 12:21:15.727847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:15.002 [2024-11-25 12:21:15.727856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.002 [2024-11-25 12:21:15.727896] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:15.002 [2024-11-25 12:21:15.732362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.002 [2024-11-25 12:21:15.732398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:15.002 [2024-11-25 12:21:15.732411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.449 ms 00:23:15.002 [2024-11-25 12:21:15.732606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.002 [2024-11-25 12:21:15.732872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.002 [2024-11-25 12:21:15.732895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:15.002 [2024-11-25 12:21:15.732906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:23:15.002 [2024-11-25 12:21:15.732914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.002 [2024-11-25 12:21:15.738005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.002 [2024-11-25 12:21:15.738129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:15.002 [2024-11-25 12:21:15.738145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.070 ms 00:23:15.002 [2024-11-25 12:21:15.738154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.002 [2024-11-25 12:21:15.744312] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.002 [2024-11-25 12:21:15.744337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:15.002 [2024-11-25 12:21:15.744346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.128 ms 00:23:15.002 [2024-11-25 12:21:15.744354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.003 [2024-11-25 12:21:15.768233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.003 [2024-11-25 12:21:15.768264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:15.003 [2024-11-25 12:21:15.768274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.840 ms 00:23:15.003 [2024-11-25 12:21:15.768282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.003 [2024-11-25 12:21:15.781540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.003 [2024-11-25 12:21:15.781666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:15.003 [2024-11-25 12:21:15.781682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.227 ms 00:23:15.003 [2024-11-25 12:21:15.781690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.003 [2024-11-25 12:21:15.851571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.003 [2024-11-25 12:21:15.851622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:15.003 [2024-11-25 12:21:15.851635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.844 ms 00:23:15.003 [2024-11-25 12:21:15.851644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.003 [2024-11-25 12:21:15.874383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.003 [2024-11-25 12:21:15.874420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:15.003 [2024-11-25 12:21:15.874431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.723 ms 00:23:15.003 [2024-11-25 12:21:15.874439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.003 [2024-11-25 12:21:15.896670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.003 [2024-11-25 12:21:15.896701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:15.003 [2024-11-25 12:21:15.896718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.200 ms 00:23:15.003 [2024-11-25 12:21:15.896726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.003 [2024-11-25 12:21:15.919041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.003 [2024-11-25 12:21:15.919075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:15.003 [2024-11-25 12:21:15.919085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.283 ms 00:23:15.003 [2024-11-25 12:21:15.919093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.003 [2024-11-25 12:21:15.941357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.003 [2024-11-25 12:21:15.941389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:15.003 [2024-11-25 12:21:15.941399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.212 ms 00:23:15.003 [2024-11-25 12:21:15.941406] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:15.003 [2024-11-25 12:21:15.941437] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:15.003 [2024-11-25 12:21:15.941452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:15.003 [2024-11-25 12:21:15.941462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:15.003 [2024-11-25 12:21:15.941783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.941998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942035] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942230] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:15.004 [2024-11-25 12:21:15.942252] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:15.004 [2024-11-25 12:21:15.942259] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f81f1b95-15f5-4eae-9c10-92a99a1dcc63 00:23:15.004 [2024-11-25 12:21:15.942267] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:15.004 [2024-11-25 12:21:15.942274] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12736 00:23:15.004 [2024-11-25 12:21:15.942280] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11776 00:23:15.004 [2024-11-25 12:21:15.942292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0815 00:23:15.004 [2024-11-25 12:21:15.942299] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:15.004 [2024-11-25 12:21:15.942309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:15.004 [2024-11-25 12:21:15.942316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:15.004 [2024-11-25 12:21:15.942328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:15.004 [2024-11-25 12:21:15.942334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:15.004 [2024-11-25 12:21:15.942341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.004 [2024-11-25 12:21:15.942349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:15.004 [2024-11-25 12:21:15.942357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:23:15.004 [2024-11-25 12:21:15.942364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.004 [2024-11-25 12:21:15.954676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.004 [2024-11-25 12:21:15.954705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:15.004 [2024-11-25 12:21:15.954716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.297 ms 00:23:15.004 [2024-11-25 12:21:15.954727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.004 [2024-11-25 12:21:15.955092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.004 [2024-11-25 12:21:15.955107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:15.004 [2024-11-25 12:21:15.955115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:23:15.004 [2024-11-25 12:21:15.955122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.004 [2024-11-25 12:21:15.987381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.004 [2024-11-25 12:21:15.987431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.004 [2024-11-25 12:21:15.987445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.004 [2024-11-25 12:21:15.987452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.004 [2024-11-25 12:21:15.987509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.005 [2024-11-25 12:21:15.987517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
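
The statistics dump above gives a direct read on write amplification for this run: 12736 total writes against 11776 user writes, and the reported WAF is simply their ratio. A one-liner confirms the 1.0815 figure:

awk 'BEGIN { printf "WAF: %.4f\n", 12736 / 11776 }'    # WAF: 1.0815
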
00:23:15.005 [2024-11-25 12:21:15.987525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.005 [2024-11-25 12:21:15.987531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.005 [2024-11-25 12:21:15.987584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.005 [2024-11-25 12:21:15.987594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.005 [2024-11-25 12:21:15.987601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.005 [2024-11-25 12:21:15.987611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.005 [2024-11-25 12:21:15.987625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.005 [2024-11-25 12:21:15.987633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.005 [2024-11-25 12:21:15.987640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.005 [2024-11-25 12:21:15.987647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.005 [2024-11-25 12:21:16.064619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.005 [2024-11-25 12:21:16.064796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.005 [2024-11-25 12:21:16.064818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.005 [2024-11-25 12:21:16.064826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.263 [2024-11-25 12:21:16.127654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.263 [2024-11-25 12:21:16.127833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.263 [2024-11-25 12:21:16.127850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.263 [2024-11-25 12:21:16.127858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.263 [2024-11-25 12:21:16.127925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.263 [2024-11-25 12:21:16.127935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.263 [2024-11-25 12:21:16.127943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.263 [2024-11-25 12:21:16.127972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.263 [2024-11-25 12:21:16.128010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.263 [2024-11-25 12:21:16.128018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.263 [2024-11-25 12:21:16.128026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.263 [2024-11-25 12:21:16.128034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.263 [2024-11-25 12:21:16.128121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.263 [2024-11-25 12:21:16.128131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.263 [2024-11-25 12:21:16.128139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.263 [2024-11-25 12:21:16.128147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.263 [2024-11-25 12:21:16.128180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.263 [2024-11-25 12:21:16.128189] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:15.263 [2024-11-25 12:21:16.128196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.263 [2024-11-25 12:21:16.128203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.263 [2024-11-25 12:21:16.128235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.263 [2024-11-25 12:21:16.128244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.263 [2024-11-25 12:21:16.128251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.263 [2024-11-25 12:21:16.128258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.263 [2024-11-25 12:21:16.128300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.263 [2024-11-25 12:21:16.128310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.263 [2024-11-25 12:21:16.128318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.263 [2024-11-25 12:21:16.128325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.263 [2024-11-25 12:21:16.128434] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.647 ms, result 0 00:23:15.828 00:23:15.828 00:23:15.828 12:21:16 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:18.353 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:18.353 12:21:18 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:18.353 12:21:18 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:18.353 12:21:18 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77122 00:23:18.353 12:21:19 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77122 ']' 00:23:18.353 Process with pid 77122 is not found 00:23:18.353 12:21:19 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77122 00:23:18.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77122) - No such process 00:23:18.353 12:21:19 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77122 is not found' 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:18.353 Remove shared memory files 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:18.353 12:21:19 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:18.353 ************************************ 00:23:18.353 END TEST ftl_restore 00:23:18.353 ************************************ 00:23:18.353 00:23:18.353 real 2m21.751s 00:23:18.353 user 2m10.765s 00:23:18.353 sys 0m11.714s 00:23:18.353 12:21:19 ftl.ftl_restore -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.353 12:21:19 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:18.353 12:21:19 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:18.353 12:21:19 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:18.353 12:21:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.353 12:21:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:18.353 ************************************ 00:23:18.353 START TEST ftl_dirty_shutdown 00:23:18.353 ************************************ 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:18.353 * Looking for test storage... 00:23:18.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.353 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:18.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.354 --rc genhtml_branch_coverage=1 00:23:18.354 --rc genhtml_function_coverage=1 00:23:18.354 --rc genhtml_legend=1 00:23:18.354 --rc geninfo_all_blocks=1 00:23:18.354 --rc geninfo_unexecuted_blocks=1 00:23:18.354 00:23:18.354 ' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:18.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.354 --rc genhtml_branch_coverage=1 00:23:18.354 --rc genhtml_function_coverage=1 00:23:18.354 --rc genhtml_legend=1 00:23:18.354 --rc geninfo_all_blocks=1 00:23:18.354 --rc geninfo_unexecuted_blocks=1 00:23:18.354 00:23:18.354 ' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:18.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.354 --rc genhtml_branch_coverage=1 00:23:18.354 --rc genhtml_function_coverage=1 00:23:18.354 --rc genhtml_legend=1 00:23:18.354 --rc geninfo_all_blocks=1 00:23:18.354 --rc geninfo_unexecuted_blocks=1 00:23:18.354 00:23:18.354 ' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:18.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.354 --rc genhtml_branch_coverage=1 00:23:18.354 --rc genhtml_function_coverage=1 00:23:18.354 --rc genhtml_legend=1 00:23:18.354 --rc geninfo_all_blocks=1 00:23:18.354 --rc geninfo_unexecuted_blocks=1 00:23:18.354 00:23:18.354 ' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:18.354 12:21:19 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78586 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78586 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78586 ']' 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.354 12:21:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:18.354 [2024-11-25 12:21:19.314801] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
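The launch unfolding here is the usual SPDK bring-up pattern: start spdk_tgt pinned to core 0 (-m 0x1, as dirty_shutdown.sh@44 shows), remember its pid for the cleanup trap, and block on the RPC socket before issuing any bdev calls. A minimal stand-in for the waitforlisten helper seen above, assuming rpc_get_methods as the liveness probe (the real helper in autotest_common.sh is more careful about retries and socket paths):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the target on core 0, as the trace above does.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!
    # Poll the default RPC socket until the target answers; rpc_get_methods is a
    # stock SPDK RPC, used here purely as a liveness check (this sketch's assumption).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done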
00:23:18.354 [2024-11-25 12:21:19.315088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78586 ] 00:23:18.611 [2024-11-25 12:21:19.473198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.611 [2024-11-25 12:21:19.571875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.175 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.175 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:19.175 12:21:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:19.175 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:19.175 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:19.175 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:19.175 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:19.175 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:19.432 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:19.432 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:19.432 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:19.432 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:19.432 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:19.432 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:19.432 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:19.432 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:19.690 { 00:23:19.690 "name": "nvme0n1", 00:23:19.690 "aliases": [ 00:23:19.690 "4501a1ed-571d-48c6-ac6a-f52e8fc76e51" 00:23:19.690 ], 00:23:19.690 "product_name": "NVMe disk", 00:23:19.690 "block_size": 4096, 00:23:19.690 "num_blocks": 1310720, 00:23:19.690 "uuid": "4501a1ed-571d-48c6-ac6a-f52e8fc76e51", 00:23:19.690 "numa_id": -1, 00:23:19.690 "assigned_rate_limits": { 00:23:19.690 "rw_ios_per_sec": 0, 00:23:19.690 "rw_mbytes_per_sec": 0, 00:23:19.690 "r_mbytes_per_sec": 0, 00:23:19.690 "w_mbytes_per_sec": 0 00:23:19.690 }, 00:23:19.690 "claimed": true, 00:23:19.690 "claim_type": "read_many_write_one", 00:23:19.690 "zoned": false, 00:23:19.690 "supported_io_types": { 00:23:19.690 "read": true, 00:23:19.690 "write": true, 00:23:19.690 "unmap": true, 00:23:19.690 "flush": true, 00:23:19.690 "reset": true, 00:23:19.690 "nvme_admin": true, 00:23:19.690 "nvme_io": true, 00:23:19.690 "nvme_io_md": false, 00:23:19.690 "write_zeroes": true, 00:23:19.690 "zcopy": false, 00:23:19.690 "get_zone_info": false, 00:23:19.690 "zone_management": false, 00:23:19.690 "zone_append": false, 00:23:19.690 "compare": true, 00:23:19.690 "compare_and_write": false, 00:23:19.690 "abort": true, 00:23:19.690 "seek_hole": false, 00:23:19.690 "seek_data": false, 00:23:19.690 
"copy": true, 00:23:19.690 "nvme_iov_md": false 00:23:19.690 }, 00:23:19.690 "driver_specific": { 00:23:19.690 "nvme": [ 00:23:19.690 { 00:23:19.690 "pci_address": "0000:00:11.0", 00:23:19.690 "trid": { 00:23:19.690 "trtype": "PCIe", 00:23:19.690 "traddr": "0000:00:11.0" 00:23:19.690 }, 00:23:19.690 "ctrlr_data": { 00:23:19.690 "cntlid": 0, 00:23:19.690 "vendor_id": "0x1b36", 00:23:19.690 "model_number": "QEMU NVMe Ctrl", 00:23:19.690 "serial_number": "12341", 00:23:19.690 "firmware_revision": "8.0.0", 00:23:19.690 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:19.690 "oacs": { 00:23:19.690 "security": 0, 00:23:19.690 "format": 1, 00:23:19.690 "firmware": 0, 00:23:19.690 "ns_manage": 1 00:23:19.690 }, 00:23:19.690 "multi_ctrlr": false, 00:23:19.690 "ana_reporting": false 00:23:19.690 }, 00:23:19.690 "vs": { 00:23:19.690 "nvme_version": "1.4" 00:23:19.690 }, 00:23:19.690 "ns_data": { 00:23:19.690 "id": 1, 00:23:19.690 "can_share": false 00:23:19.690 } 00:23:19.690 } 00:23:19.690 ], 00:23:19.690 "mp_policy": "active_passive" 00:23:19.690 } 00:23:19.690 } 00:23:19.690 ]' 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:19.690 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:19.948 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=f71f7c34-bd1e-41d8-8128-ddfc47421af3 00:23:19.948 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:19.948 12:21:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f71f7c34-bd1e-41d8-8128-ddfc47421af3 00:23:20.206 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:20.464 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=26a1c1e9-18d7-4e13-8ad2-7f32c7af9c68 00:23:20.464 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 26a1c1e9-18d7-4e13-8ad2-7f32c7af9c68 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:20.725 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:20.725 { 00:23:20.725 "name": "dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d", 00:23:20.725 "aliases": [ 00:23:20.725 "lvs/nvme0n1p0" 00:23:20.725 ], 00:23:20.725 "product_name": "Logical Volume", 00:23:20.725 "block_size": 4096, 00:23:20.725 "num_blocks": 26476544, 00:23:20.725 "uuid": "dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d", 00:23:20.725 "assigned_rate_limits": { 00:23:20.725 "rw_ios_per_sec": 0, 00:23:20.725 "rw_mbytes_per_sec": 0, 00:23:20.725 "r_mbytes_per_sec": 0, 00:23:20.725 "w_mbytes_per_sec": 0 00:23:20.725 }, 00:23:20.725 "claimed": false, 00:23:20.725 "zoned": false, 00:23:20.725 "supported_io_types": { 00:23:20.725 "read": true, 00:23:20.725 "write": true, 00:23:20.725 "unmap": true, 00:23:20.725 "flush": false, 00:23:20.725 "reset": true, 00:23:20.725 "nvme_admin": false, 00:23:20.725 "nvme_io": false, 00:23:20.725 "nvme_io_md": false, 00:23:20.725 "write_zeroes": true, 00:23:20.725 "zcopy": false, 00:23:20.725 "get_zone_info": false, 00:23:20.725 "zone_management": false, 00:23:20.725 "zone_append": false, 00:23:20.725 "compare": false, 00:23:20.725 "compare_and_write": false, 00:23:20.725 "abort": false, 00:23:20.725 "seek_hole": true, 00:23:20.725 "seek_data": true, 00:23:20.725 "copy": false, 00:23:20.725 "nvme_iov_md": false 00:23:20.725 }, 00:23:20.725 "driver_specific": { 00:23:20.725 "lvol": { 00:23:20.725 "lvol_store_uuid": "26a1c1e9-18d7-4e13-8ad2-7f32c7af9c68", 00:23:20.726 "base_bdev": "nvme0n1", 00:23:20.726 "thin_provision": true, 00:23:20.726 "num_allocated_clusters": 0, 00:23:20.726 "snapshot": false, 00:23:20.726 "clone": false, 00:23:20.726 "esnap_clone": false 00:23:20.726 } 00:23:20.726 } 00:23:20.726 } 00:23:20.726 ]' 00:23:20.726 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:20.726 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:20.726 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:20.984 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:20.984 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:20.984 12:21:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:20.984 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:20.984 12:21:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:20.984 12:21:21 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:21.242 { 00:23:21.242 "name": "dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d", 00:23:21.242 "aliases": [ 00:23:21.242 "lvs/nvme0n1p0" 00:23:21.242 ], 00:23:21.242 "product_name": "Logical Volume", 00:23:21.242 "block_size": 4096, 00:23:21.242 "num_blocks": 26476544, 00:23:21.242 "uuid": "dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d", 00:23:21.242 "assigned_rate_limits": { 00:23:21.242 "rw_ios_per_sec": 0, 00:23:21.242 "rw_mbytes_per_sec": 0, 00:23:21.242 "r_mbytes_per_sec": 0, 00:23:21.242 "w_mbytes_per_sec": 0 00:23:21.242 }, 00:23:21.242 "claimed": false, 00:23:21.242 "zoned": false, 00:23:21.242 "supported_io_types": { 00:23:21.242 "read": true, 00:23:21.242 "write": true, 00:23:21.242 "unmap": true, 00:23:21.242 "flush": false, 00:23:21.242 "reset": true, 00:23:21.242 "nvme_admin": false, 00:23:21.242 "nvme_io": false, 00:23:21.242 "nvme_io_md": false, 00:23:21.242 "write_zeroes": true, 00:23:21.242 "zcopy": false, 00:23:21.242 "get_zone_info": false, 00:23:21.242 "zone_management": false, 00:23:21.242 "zone_append": false, 00:23:21.242 "compare": false, 00:23:21.242 "compare_and_write": false, 00:23:21.242 "abort": false, 00:23:21.242 "seek_hole": true, 00:23:21.242 "seek_data": true, 00:23:21.242 "copy": false, 00:23:21.242 "nvme_iov_md": false 00:23:21.242 }, 00:23:21.242 "driver_specific": { 00:23:21.242 "lvol": { 00:23:21.242 "lvol_store_uuid": "26a1c1e9-18d7-4e13-8ad2-7f32c7af9c68", 00:23:21.242 "base_bdev": "nvme0n1", 00:23:21.242 "thin_provision": true, 00:23:21.242 "num_allocated_clusters": 0, 00:23:21.242 "snapshot": false, 00:23:21.242 "clone": false, 00:23:21.242 "esnap_clone": false 00:23:21.242 } 00:23:21.242 } 00:23:21.242 } 00:23:21.242 ]' 00:23:21.242 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:21.501 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:21.760 { 00:23:21.760 "name": "dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d", 00:23:21.760 "aliases": [ 00:23:21.760 "lvs/nvme0n1p0" 00:23:21.760 ], 00:23:21.760 "product_name": "Logical Volume", 00:23:21.760 "block_size": 4096, 00:23:21.760 "num_blocks": 26476544, 00:23:21.760 "uuid": "dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d", 00:23:21.760 "assigned_rate_limits": { 00:23:21.760 "rw_ios_per_sec": 0, 00:23:21.760 "rw_mbytes_per_sec": 0, 00:23:21.760 "r_mbytes_per_sec": 0, 00:23:21.760 "w_mbytes_per_sec": 0 00:23:21.760 }, 00:23:21.760 "claimed": false, 00:23:21.760 "zoned": false, 00:23:21.760 "supported_io_types": { 00:23:21.760 "read": true, 00:23:21.760 "write": true, 00:23:21.760 "unmap": true, 00:23:21.760 "flush": false, 00:23:21.760 "reset": true, 00:23:21.760 "nvme_admin": false, 00:23:21.760 "nvme_io": false, 00:23:21.760 "nvme_io_md": false, 00:23:21.760 "write_zeroes": true, 00:23:21.760 "zcopy": false, 00:23:21.760 "get_zone_info": false, 00:23:21.760 "zone_management": false, 00:23:21.760 "zone_append": false, 00:23:21.760 "compare": false, 00:23:21.760 "compare_and_write": false, 00:23:21.760 "abort": false, 00:23:21.760 "seek_hole": true, 00:23:21.760 "seek_data": true, 00:23:21.760 "copy": false, 00:23:21.760 "nvme_iov_md": false 00:23:21.760 }, 00:23:21.760 "driver_specific": { 00:23:21.760 "lvol": { 00:23:21.760 "lvol_store_uuid": "26a1c1e9-18d7-4e13-8ad2-7f32c7af9c68", 00:23:21.760 "base_bdev": "nvme0n1", 00:23:21.760 "thin_provision": true, 00:23:21.760 "num_allocated_clusters": 0, 00:23:21.760 "snapshot": false, 00:23:21.760 "clone": false, 00:23:21.760 "esnap_clone": false 00:23:21.760 } 00:23:21.760 } 00:23:21.760 } 00:23:21.760 ]' 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d 
--l2p_dram_limit 10' 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:21.760 12:21:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dd15d2cf-cd5f-4d7b-b18d-9d9f9624816d --l2p_dram_limit 10 -c nvc0n1p0 00:23:22.020 [2024-11-25 12:21:22.999219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:22.999264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:22.020 [2024-11-25 12:21:22.999279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:22.020 [2024-11-25 12:21:22.999286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:22.999330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:22.999338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.020 [2024-11-25 12:21:22.999346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:22.020 [2024-11-25 12:21:22.999352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:22.999373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:22.020 [2024-11-25 12:21:23.000021] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:22.020 [2024-11-25 12:21:23.000039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.000045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.020 [2024-11-25 12:21:23.000053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:23:22.020 [2024-11-25 12:21:23.000060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.000133] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aed19bdd-3515-46f8-96d0-b2af87e28583 00:23:22.020 [2024-11-25 12:21:23.001106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.001128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:22.020 [2024-11-25 12:21:23.001136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:22.020 [2024-11-25 12:21:23.001144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.005863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.005891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.020 [2024-11-25 12:21:23.005901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:23:22.020 [2024-11-25 12:21:23.005908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.005987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.005996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.020 [2024-11-25 12:21:23.006003] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:22.020 [2024-11-25 12:21:23.006012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.006047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.006057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:22.020 [2024-11-25 12:21:23.006063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:22.020 [2024-11-25 12:21:23.006072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.006090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.020 [2024-11-25 12:21:23.009061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.009088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.020 [2024-11-25 12:21:23.009099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:23:22.020 [2024-11-25 12:21:23.009105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.009133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.009140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:22.020 [2024-11-25 12:21:23.009147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:22.020 [2024-11-25 12:21:23.009154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.009169] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:22.020 [2024-11-25 12:21:23.009279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:22.020 [2024-11-25 12:21:23.009291] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:22.020 [2024-11-25 12:21:23.009300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:22.020 [2024-11-25 12:21:23.009310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009317] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009325] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:22.020 [2024-11-25 12:21:23.009331] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:22.020 [2024-11-25 12:21:23.009340] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:22.020 [2024-11-25 12:21:23.009346] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:22.020 [2024-11-25 12:21:23.009353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.009358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:22.020 [2024-11-25 12:21:23.009366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:23:22.020 [2024-11-25 12:21:23.009376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.009445] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.020 [2024-11-25 12:21:23.009452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:22.020 [2024-11-25 12:21:23.009459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:22.020 [2024-11-25 12:21:23.009465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.020 [2024-11-25 12:21:23.009552] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:22.020 [2024-11-25 12:21:23.009560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:22.020 [2024-11-25 12:21:23.009568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:22.020 [2024-11-25 12:21:23.009588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:22.020 [2024-11-25 12:21:23.009607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.020 [2024-11-25 12:21:23.009619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:22.020 [2024-11-25 12:21:23.009626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:22.020 [2024-11-25 12:21:23.009632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.020 [2024-11-25 12:21:23.009638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:22.020 [2024-11-25 12:21:23.009644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:22.020 [2024-11-25 12:21:23.009650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:22.020 [2024-11-25 12:21:23.009663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:22.020 [2024-11-25 12:21:23.009683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:22.020 [2024-11-25 12:21:23.009702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:22.020 [2024-11-25 12:21:23.009721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009733] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:22.020 [2024-11-25 12:21:23.009738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.020 [2024-11-25 12:21:23.009750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:22.020 [2024-11-25 12:21:23.009758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:22.020 [2024-11-25 12:21:23.009763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.020 [2024-11-25 12:21:23.009770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:22.020 [2024-11-25 12:21:23.009775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:22.021 [2024-11-25 12:21:23.009781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.021 [2024-11-25 12:21:23.009786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:22.021 [2024-11-25 12:21:23.009793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:22.021 [2024-11-25 12:21:23.009799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.021 [2024-11-25 12:21:23.009805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:22.021 [2024-11-25 12:21:23.009810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:22.021 [2024-11-25 12:21:23.009817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.021 [2024-11-25 12:21:23.009822] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:22.021 [2024-11-25 12:21:23.009829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:22.021 [2024-11-25 12:21:23.009835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.021 [2024-11-25 12:21:23.009842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.021 [2024-11-25 12:21:23.009848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:22.021 [2024-11-25 12:21:23.009857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:22.021 [2024-11-25 12:21:23.009862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:22.021 [2024-11-25 12:21:23.009869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:22.021 [2024-11-25 12:21:23.009874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:22.021 [2024-11-25 12:21:23.009881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:22.021 [2024-11-25 12:21:23.009889] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:22.021 [2024-11-25 12:21:23.009897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.021 [2024-11-25 12:21:23.009908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:22.021 [2024-11-25 12:21:23.009916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:22.021 [2024-11-25 12:21:23.009922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:22.021 [2024-11-25 12:21:23.009929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:22.021 [2024-11-25 12:21:23.009934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:22.021 [2024-11-25 12:21:23.009941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:22.021 [2024-11-25 12:21:23.010147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:22.021 [2024-11-25 12:21:23.010186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:22.021 [2024-11-25 12:21:23.010211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:22.021 [2024-11-25 12:21:23.010237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:22.021 [2024-11-25 12:21:23.010260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:22.021 [2024-11-25 12:21:23.010323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:22.021 [2024-11-25 12:21:23.010349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:22.021 [2024-11-25 12:21:23.010374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:22.021 [2024-11-25 12:21:23.010397] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:22.021 [2024-11-25 12:21:23.010425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.021 [2024-11-25 12:21:23.010449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:22.021 [2024-11-25 12:21:23.010505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:22.021 [2024-11-25 12:21:23.010530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:22.021 [2024-11-25 12:21:23.010555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:22.021 [2024-11-25 12:21:23.010579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.021 [2024-11-25 12:21:23.010597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:22.021 [2024-11-25 12:21:23.010641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:23:22.021 [2024-11-25 12:21:23.010661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.021 [2024-11-25 12:21:23.010721] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:22.021 [2024-11-25 12:21:23.010754] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:24.549 [2024-11-25 12:21:25.332286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.332482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:24.549 [2024-11-25 12:21:25.332555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2321.555 ms 00:23:24.549 [2024-11-25 12:21:25.332583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.357434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.357631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.549 [2024-11-25 12:21:25.357694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.651 ms 00:23:24.549 [2024-11-25 12:21:25.357708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.357832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.357845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:24.549 [2024-11-25 12:21:25.357854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:24.549 [2024-11-25 12:21:25.357865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.387681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.387717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.549 [2024-11-25 12:21:25.387727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.780 ms 00:23:24.549 [2024-11-25 12:21:25.387736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.387762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.387775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.549 [2024-11-25 12:21:25.387783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:24.549 [2024-11-25 12:21:25.387792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.388140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.388158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.549 [2024-11-25 12:21:25.388167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:23:24.549 [2024-11-25 12:21:25.388176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.388278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.388289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.549 [2024-11-25 12:21:25.388298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:24.549 [2024-11-25 12:21:25.388308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.401912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.401960] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.549 [2024-11-25 12:21:25.401969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.587 ms 00:23:24.549 [2024-11-25 12:21:25.401979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.413064] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:24.549 [2024-11-25 12:21:25.415619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.415645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:24.549 [2024-11-25 12:21:25.415658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.558 ms 00:23:24.549 [2024-11-25 12:21:25.415666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.489981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.490141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:24.549 [2024-11-25 12:21:25.490165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.287 ms 00:23:24.549 [2024-11-25 12:21:25.490174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.490348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.490361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:24.549 [2024-11-25 12:21:25.490374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:23:24.549 [2024-11-25 12:21:25.490381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.513001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.513128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:24.549 [2024-11-25 12:21:25.513148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.575 ms 00:23:24.549 [2024-11-25 12:21:25.513157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.534781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.534811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:24.549 [2024-11-25 12:21:25.534824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.589 ms 00:23:24.549 [2024-11-25 12:21:25.534831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.535400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.535421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:24.549 [2024-11-25 12:21:25.535431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:23:24.549 [2024-11-25 12:21:25.535438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.602083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.602116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:24.549 [2024-11-25 12:21:25.602132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.610 ms 00:23:24.549 [2024-11-25 12:21:25.602141] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.549 [2024-11-25 12:21:25.625437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.549 [2024-11-25 12:21:25.625474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:24.549 [2024-11-25 12:21:25.625505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.227 ms 00:23:24.549 [2024-11-25 12:21:25.625513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.808 [2024-11-25 12:21:25.649092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.808 [2024-11-25 12:21:25.649124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:24.808 [2024-11-25 12:21:25.649137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.535 ms 00:23:24.808 [2024-11-25 12:21:25.649144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.808 [2024-11-25 12:21:25.671841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.808 [2024-11-25 12:21:25.671876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:24.808 [2024-11-25 12:21:25.671889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.657 ms 00:23:24.808 [2024-11-25 12:21:25.671897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.808 [2024-11-25 12:21:25.671937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.808 [2024-11-25 12:21:25.671960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:24.808 [2024-11-25 12:21:25.671973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:24.808 [2024-11-25 12:21:25.671981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.808 [2024-11-25 12:21:25.672058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.808 [2024-11-25 12:21:25.672067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:24.808 [2024-11-25 12:21:25.672079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:24.808 [2024-11-25 12:21:25.672087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.808 [2024-11-25 12:21:25.673003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2673.351 ms, result 0 00:23:24.808 { 00:23:24.808 "name": "ftl0", 00:23:24.808 "uuid": "aed19bdd-3515-46f8-96d0-b2af87e28583" 00:23:24.808 } 00:23:24.808 12:21:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:24.808 12:21:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:25.066 12:21:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:25.066 12:21:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:25.066 12:21:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:25.066 /dev/nbd0 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:25.066 1+0 records in 00:23:25.066 1+0 records out 00:23:25.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279286 s, 14.7 MB/s 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:23:25.066 12:21:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:25.325 [2024-11-25 12:21:26.192164] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:23:25.325 [2024-11-25 12:21:26.192391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78717 ] 00:23:25.325 [2024-11-25 12:21:26.352028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.583 [2024-11-25 12:21:26.446254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.956  [2024-11-25T12:21:28.968Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-25T12:21:29.900Z] Copying: 392/1024 [MB] (196 MBps) [2024-11-25T12:21:30.831Z] Copying: 601/1024 [MB] (208 MBps) [2024-11-25T12:21:31.765Z] Copying: 841/1024 [MB] (240 MBps) [2024-11-25T12:21:32.023Z] Copying: 1024/1024 [MB] (average 215 MBps) 00:23:30.943 00:23:30.943 12:21:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:33.516 12:21:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:33.516 [2024-11-25 12:21:34.182726] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
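The copy that just finished and the one starting here form the test's write phase: spdk_dd first fills a scratch file with 262144 random 4 KiB blocks (262144 x 4096 B = 1 GiB), the file is checksummed, and the same blocks are then replayed onto the FTL device exposed at /dev/nbd0 with direct I/O so the data lands in the FTL write path. Condensed from the invocations traced above (same binaries, flags and paths; a sketch, not the test script itself):

    SPDK=/home/vagrant/spdk_repo/spdk
    # 262144 blocks x 4096 B = 1 GiB of random payload.
    "$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom \
        --of="$SPDK/test/ftl/testfile" --bs=4096 --count=262144
    md5sum "$SPDK/test/ftl/testfile"
    # Replay it through the NBD-exported FTL bdev, bypassing the page cache.
    "$SPDK/build/bin/spdk_dd" -m 0x2 --if="$SPDK/test/ftl/testfile" \
        --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

Note the throughput gap between the two runs visible in the progress lines (roughly 200 MBps filling the file versus about 28 MBps onto /dev/nbd0): the second copy funnels every 4 KiB block through the NBD kernel round-trip and the FTL write path, NV cache included.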
00:23:33.516 [2024-11-25 12:21:34.183022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78804 ] 00:23:33.516 [2024-11-25 12:21:34.340796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.516 [2024-11-25 12:21:34.439122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.888  [2024-11-25T12:21:36.902Z] Copying: 33/1024 [MB] (33 MBps) [2024-11-25T12:21:37.835Z] Copying: 68/1024 [MB] (35 MBps) [2024-11-25T12:21:38.770Z] Copying: 100/1024 [MB] (32 MBps) [2024-11-25T12:21:39.703Z] Copying: 129/1024 [MB] (29 MBps) [2024-11-25T12:21:41.081Z] Copying: 159/1024 [MB] (29 MBps) [2024-11-25T12:21:42.017Z] Copying: 189/1024 [MB] (29 MBps) [2024-11-25T12:21:42.950Z] Copying: 219/1024 [MB] (29 MBps) [2024-11-25T12:21:43.965Z] Copying: 246/1024 [MB] (26 MBps) [2024-11-25T12:21:44.929Z] Copying: 275/1024 [MB] (29 MBps) [2024-11-25T12:21:45.862Z] Copying: 304/1024 [MB] (28 MBps) [2024-11-25T12:21:46.793Z] Copying: 332/1024 [MB] (28 MBps) [2024-11-25T12:21:47.725Z] Copying: 361/1024 [MB] (29 MBps) [2024-11-25T12:21:48.655Z] Copying: 391/1024 [MB] (30 MBps) [2024-11-25T12:21:50.028Z] Copying: 409/1024 [MB] (17 MBps) [2024-11-25T12:21:50.660Z] Copying: 423/1024 [MB] (14 MBps) [2024-11-25T12:21:52.032Z] Copying: 453/1024 [MB] (29 MBps) [2024-11-25T12:21:52.965Z] Copying: 485/1024 [MB] (32 MBps) [2024-11-25T12:21:53.896Z] Copying: 514/1024 [MB] (29 MBps) [2024-11-25T12:21:54.828Z] Copying: 544/1024 [MB] (29 MBps) [2024-11-25T12:21:55.761Z] Copying: 573/1024 [MB] (29 MBps) [2024-11-25T12:21:56.694Z] Copying: 602/1024 [MB] (28 MBps) [2024-11-25T12:21:58.074Z] Copying: 631/1024 [MB] (29 MBps) [2024-11-25T12:21:59.007Z] Copying: 661/1024 [MB] (29 MBps) [2024-11-25T12:21:59.958Z] Copying: 689/1024 [MB] (28 MBps) [2024-11-25T12:22:00.891Z] Copying: 718/1024 [MB] (28 MBps) [2024-11-25T12:22:01.825Z] Copying: 747/1024 [MB] (29 MBps) [2024-11-25T12:22:02.763Z] Copying: 775/1024 [MB] (27 MBps) [2024-11-25T12:22:03.703Z] Copying: 804/1024 [MB] (29 MBps) [2024-11-25T12:22:05.076Z] Copying: 833/1024 [MB] (28 MBps) [2024-11-25T12:22:06.008Z] Copying: 862/1024 [MB] (29 MBps) [2024-11-25T12:22:06.943Z] Copying: 892/1024 [MB] (30 MBps) [2024-11-25T12:22:07.876Z] Copying: 923/1024 [MB] (30 MBps) [2024-11-25T12:22:08.809Z] Copying: 953/1024 [MB] (29 MBps) [2024-11-25T12:22:09.754Z] Copying: 981/1024 [MB] (28 MBps) [2024-11-25T12:22:10.424Z] Copying: 1011/1024 [MB] (29 MBps) [2024-11-25T12:22:10.682Z] Copying: 1024/1024 [MB] (average 28 MBps) 00:24:09.602 00:24:09.602 12:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:09.602 12:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:09.860 12:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:10.118 [2024-11-25 12:22:11.055153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.118 [2024-11-25 12:22:11.055210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:10.118 [2024-11-25 12:22:11.055223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:10.118 [2024-11-25 12:22:11.055234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:10.118 [2024-11-25 12:22:11.055257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:10.118 [2024-11-25 12:22:11.057904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.119 [2024-11-25 12:22:11.057935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:10.119 [2024-11-25 12:22:11.057955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.626 ms 00:24:10.119 [2024-11-25 12:22:11.057963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.119 [2024-11-25 12:22:11.059550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.119 [2024-11-25 12:22:11.059584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:10.119 [2024-11-25 12:22:11.059595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:24:10.119 [2024-11-25 12:22:11.059603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.119 [2024-11-25 12:22:11.073199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.119 [2024-11-25 12:22:11.073232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:10.119 [2024-11-25 12:22:11.073245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.573 ms 00:24:10.119 [2024-11-25 12:22:11.073252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.119 [2024-11-25 12:22:11.079434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.119 [2024-11-25 12:22:11.079606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:10.119 [2024-11-25 12:22:11.079626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.147 ms 00:24:10.119 [2024-11-25 12:22:11.079634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.119 [2024-11-25 12:22:11.103425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.119 [2024-11-25 12:22:11.103612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:10.119 [2024-11-25 12:22:11.103877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.720 ms 00:24:10.119 [2024-11-25 12:22:11.103933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.119 [2024-11-25 12:22:11.124305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.119 [2024-11-25 12:22:11.124352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:10.119 [2024-11-25 12:22:11.124367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.204 ms 00:24:10.119 [2024-11-25 12:22:11.124378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.119 [2024-11-25 12:22:11.124571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.119 [2024-11-25 12:22:11.124583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:10.119 [2024-11-25 12:22:11.124596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:24:10.119 [2024-11-25 12:22:11.124603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.119 [2024-11-25 12:22:11.147462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.119 [2024-11-25 12:22:11.147497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:10.119 
[2024-11-25 12:22:11.147510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.838 ms
00:24:10.119 [2024-11-25 12:22:11.147518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:10.119 [2024-11-25 12:22:11.169718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.119 [2024-11-25 12:22:11.169749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:24:10.119 [2024-11-25 12:22:11.169762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.162 ms
00:24:10.119 [2024-11-25 12:22:11.169769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:10.119 [2024-11-25 12:22:11.191562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.119 [2024-11-25 12:22:11.191590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:24:10.119 [2024-11-25 12:22:11.191601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.756 ms
00:24:10.119 [2024-11-25 12:22:11.191608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:10.378 [2024-11-25 12:22:11.213397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.378 [2024-11-25 12:22:11.213544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:10.378 [2024-11-25 12:22:11.213565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.702 ms
00:24:10.378 [2024-11-25 12:22:11.213572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:10.378 [2024-11-25 12:22:11.213604] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:10.378 [2024-11-25 12:22:11.213617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free [... 100 identical per-band records condensed ...]
00:24:10.380 [2024-11-25 12:22:11.214477] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:10.380 [2024-11-25 12:22:11.214486] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aed19bdd-3515-46f8-96d0-b2af87e28583
00:24:10.380 [2024-11-25 12:22:11.214494] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:24:10.380 [2024-11-25 12:22:11.214504] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:24:10.380 [2024-11-25 12:22:11.214510] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:10.380 [2024-11-25 12:22:11.214521] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:10.380 [2024-11-25 12:22:11.214528] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:10.380 [2024-11-25 12:22:11.214537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:10.380 [2024-11-25 12:22:11.214544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:10.380 [2024-11-25 12:22:11.214552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:10.380 [2024-11-25 12:22:11.214558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:10.380 [2024-11-25 12:22:11.214566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.380 [2024-11-25 12:22:11.214573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:10.380 [2024-11-25 12:22:11.214582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms
00:24:10.380 [2024-11-25 12:22:11.214589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:10.380 [2024-11-25 12:22:11.227071] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:10.380 [2024-11-25 12:22:11.227099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:10.380 [2024-11-25 12:22:11.227113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.448 ms 00:24:10.380 [2024-11-25 12:22:11.227121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.227462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.380 [2024-11-25 12:22:11.227470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:10.380 [2024-11-25 12:22:11.227480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:24:10.380 [2024-11-25 12:22:11.227487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.268983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.269023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:10.380 [2024-11-25 12:22:11.269036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.269044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.269106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.269114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:10.380 [2024-11-25 12:22:11.269123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.269131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.269202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.269212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:10.380 [2024-11-25 12:22:11.269224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.269231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.269252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.269259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:10.380 [2024-11-25 12:22:11.269269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.269275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.346129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.346176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:10.380 [2024-11-25 12:22:11.346188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.346196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.408913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.409087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:10.380 [2024-11-25 12:22:11.409107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.409115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:10.380 [2024-11-25 12:22:11.409203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.409213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.380 [2024-11-25 12:22:11.409222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.409233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.409281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.409290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.380 [2024-11-25 12:22:11.409299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.409306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.409394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.409404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.380 [2024-11-25 12:22:11.409413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.409420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.380 [2024-11-25 12:22:11.409454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.380 [2024-11-25 12:22:11.409462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:10.380 [2024-11-25 12:22:11.409472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.380 [2024-11-25 12:22:11.409479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.381 [2024-11-25 12:22:11.409521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.381 [2024-11-25 12:22:11.409530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.381 [2024-11-25 12:22:11.409539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.381 [2024-11-25 12:22:11.409547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.381 [2024-11-25 12:22:11.409593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.381 [2024-11-25 12:22:11.409602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.381 [2024-11-25 12:22:11.409611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.381 [2024-11-25 12:22:11.409618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.381 [2024-11-25 12:22:11.409738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.554 ms, result 0 00:24:10.381 true 00:24:10.381 12:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78586 00:24:10.381 12:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78586 00:24:10.381 12:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:10.639 [2024-11-25 12:22:11.495578] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
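
With ftl0 cleanly unloaded, the test now sets up the dirty-shutdown condition: the spdk_tgt process is SIGKILLed so it gets no chance to clean anything up, its stale trace file is removed, and spdk_dd generates fresh random test data. A sketch of the steps exactly as they appear in the trace above, with the size arithmetic spelled out (78586 is the target PID recorded earlier in this job):

    kill -9 78586                             # kill the SPDK target outright; no orderly teardown
    rm -f /dev/shm/spdk_tgt_trace.pid78586    # drop the dead target's shared-memory trace file
    # 262144 blocks x 4096 bytes = 1 GiB, i.e. the 1024 MB totals in the Copying lines
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144

The @88 step just below then replays that file straight into ftl0 (spdk_dd --ob=ftl0 --count=262144 --seek=262144 with the saved ftl.json), so the FTL startup traced next runs from the on-disk config and marks the device dirty ('Set FTL dirty state') before accepting writes.
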
00:24:10.639 [2024-11-25 12:22:11.495693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79193 ] 00:24:10.639 [2024-11-25 12:22:11.655995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.898 [2024-11-25 12:22:11.752987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.269  [2024-11-25T12:22:14.281Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-25T12:22:15.214Z] Copying: 392/1024 [MB] (196 MBps) [2024-11-25T12:22:16.177Z] Copying: 602/1024 [MB] (209 MBps) [2024-11-25T12:22:16.742Z] Copying: 840/1024 [MB] (238 MBps) [2024-11-25T12:22:17.308Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:24:16.228 00:24:16.228 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78586 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:16.228 12:22:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:16.487 [2024-11-25 12:22:17.374062] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:24:16.487 [2024-11-25 12:22:17.374343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79257 ] 00:24:16.487 [2024-11-25 12:22:17.540246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.745 [2024-11-25 12:22:17.620265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.003 [2024-11-25 12:22:17.832269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:17.003 [2024-11-25 12:22:17.832319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:17.003 [2024-11-25 12:22:17.895161] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:17.003 [2024-11-25 12:22:17.895509] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:17.003 [2024-11-25 12:22:17.895719] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:17.003 [2024-11-25 12:22:18.068738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.003 [2024-11-25 12:22:18.068781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:17.003 [2024-11-25 12:22:18.068794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:17.003 [2024-11-25 12:22:18.068802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.003 [2024-11-25 12:22:18.068850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.003 [2024-11-25 12:22:18.068860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:17.003 [2024-11-25 12:22:18.068868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:17.003 [2024-11-25 12:22:18.068876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.003 [2024-11-25 12:22:18.068891] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:17.003 
[2024-11-25 12:22:18.069667] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:17.003 [2024-11-25 12:22:18.069699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.003 [2024-11-25 12:22:18.069707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:17.003 [2024-11-25 12:22:18.069715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:24:17.003 [2024-11-25 12:22:18.069723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.003 [2024-11-25 12:22:18.070795] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:17.262 [2024-11-25 12:22:18.082776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.082813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:17.262 [2024-11-25 12:22:18.082826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.982 ms 00:24:17.262 [2024-11-25 12:22:18.082835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.262 [2024-11-25 12:22:18.082887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.082897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:17.262 [2024-11-25 12:22:18.082905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:17.262 [2024-11-25 12:22:18.082912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.262 [2024-11-25 12:22:18.087577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.087607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:17.262 [2024-11-25 12:22:18.087616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:24:17.262 [2024-11-25 12:22:18.087623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.262 [2024-11-25 12:22:18.087695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.087704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:17.262 [2024-11-25 12:22:18.087712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:17.262 [2024-11-25 12:22:18.087719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.262 [2024-11-25 12:22:18.087765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.087777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:17.262 [2024-11-25 12:22:18.087785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:17.262 [2024-11-25 12:22:18.087792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.262 [2024-11-25 12:22:18.087812] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:17.262 [2024-11-25 12:22:18.091099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.091126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:17.262 [2024-11-25 12:22:18.091135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.292 ms 00:24:17.262 [2024-11-25 12:22:18.091143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:17.262 [2024-11-25 12:22:18.091169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.091177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:17.262 [2024-11-25 12:22:18.091185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:17.262 [2024-11-25 12:22:18.091192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.262 [2024-11-25 12:22:18.091210] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:17.262 [2024-11-25 12:22:18.091229] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:17.262 [2024-11-25 12:22:18.091262] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:17.262 [2024-11-25 12:22:18.091276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:17.262 [2024-11-25 12:22:18.091377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:17.262 [2024-11-25 12:22:18.091388] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:17.262 [2024-11-25 12:22:18.091398] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:17.262 [2024-11-25 12:22:18.091407] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:17.262 [2024-11-25 12:22:18.091418] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:17.262 [2024-11-25 12:22:18.091426] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:17.262 [2024-11-25 12:22:18.091433] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:17.262 [2024-11-25 12:22:18.091440] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:17.262 [2024-11-25 12:22:18.091446] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:17.262 [2024-11-25 12:22:18.091453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.091460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:17.262 [2024-11-25 12:22:18.091468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:24:17.262 [2024-11-25 12:22:18.091475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.262 [2024-11-25 12:22:18.091556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.262 [2024-11-25 12:22:18.091566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:17.262 [2024-11-25 12:22:18.091573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:17.262 [2024-11-25 12:22:18.091580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.262 [2024-11-25 12:22:18.091679] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:17.262 [2024-11-25 12:22:18.091689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:17.262 [2024-11-25 12:22:18.091697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:17.262 [2024-11-25 12:22:18.091704] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.262 [2024-11-25 12:22:18.091711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:17.262 [2024-11-25 12:22:18.091718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:17.262 [2024-11-25 12:22:18.091725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:17.262 [2024-11-25 12:22:18.091732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:17.262 [2024-11-25 12:22:18.091739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:17.262 [2024-11-25 12:22:18.091745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:17.262 [2024-11-25 12:22:18.091752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:17.262 [2024-11-25 12:22:18.091763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:17.262 [2024-11-25 12:22:18.091769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:17.262 [2024-11-25 12:22:18.091776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:17.262 [2024-11-25 12:22:18.091782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:17.262 [2024-11-25 12:22:18.091788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.262 [2024-11-25 12:22:18.091794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:17.262 [2024-11-25 12:22:18.091802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:17.262 [2024-11-25 12:22:18.091809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.262 [2024-11-25 12:22:18.091815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:17.262 [2024-11-25 12:22:18.091821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:17.262 [2024-11-25 12:22:18.091828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.262 [2024-11-25 12:22:18.091834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:17.262 [2024-11-25 12:22:18.091840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:17.262 [2024-11-25 12:22:18.091846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.263 [2024-11-25 12:22:18.091852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:17.263 [2024-11-25 12:22:18.091859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:17.263 [2024-11-25 12:22:18.091865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.263 [2024-11-25 12:22:18.091872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:17.263 [2024-11-25 12:22:18.091879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:17.263 [2024-11-25 12:22:18.091885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.263 [2024-11-25 12:22:18.091891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:17.263 [2024-11-25 12:22:18.091898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:17.263 [2024-11-25 12:22:18.091904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:17.263 [2024-11-25 12:22:18.091910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:17.263 
[2024-11-25 12:22:18.091917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:17.263 [2024-11-25 12:22:18.091923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:17.263 [2024-11-25 12:22:18.091929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:17.263 [2024-11-25 12:22:18.091936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:17.263 [2024-11-25 12:22:18.091942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.263 [2024-11-25 12:22:18.091977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:17.263 [2024-11-25 12:22:18.091990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:17.263 [2024-11-25 12:22:18.092001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.263 [2024-11-25 12:22:18.092011] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:17.263 [2024-11-25 12:22:18.092019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:17.263 [2024-11-25 12:22:18.092032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:17.263 [2024-11-25 12:22:18.092042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.263 [2024-11-25 12:22:18.092049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:17.263 [2024-11-25 12:22:18.092056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:17.263 [2024-11-25 12:22:18.092063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:17.263 [2024-11-25 12:22:18.092070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:17.263 [2024-11-25 12:22:18.092076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:17.263 [2024-11-25 12:22:18.092082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:17.263 [2024-11-25 12:22:18.092091] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:17.263 [2024-11-25 12:22:18.092100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:17.263 [2024-11-25 12:22:18.092108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:17.263 [2024-11-25 12:22:18.092115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:17.263 [2024-11-25 12:22:18.092121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:17.263 [2024-11-25 12:22:18.092128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:17.263 [2024-11-25 12:22:18.092135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:17.263 [2024-11-25 12:22:18.092142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:17.263 [2024-11-25 12:22:18.092149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:24:17.263 [2024-11-25 12:22:18.092155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:17.263 [2024-11-25 12:22:18.092164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:17.263 [2024-11-25 12:22:18.092174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:17.263 [2024-11-25 12:22:18.092184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:17.263 [2024-11-25 12:22:18.092197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:17.263 [2024-11-25 12:22:18.092210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:17.263 [2024-11-25 12:22:18.092218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:17.263 [2024-11-25 12:22:18.092224] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:17.263 [2024-11-25 12:22:18.092233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:17.263 [2024-11-25 12:22:18.092240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:17.263 [2024-11-25 12:22:18.092247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:17.263 [2024-11-25 12:22:18.092254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:17.263 [2024-11-25 12:22:18.092261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:17.263 [2024-11-25 12:22:18.092272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.092283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:17.263 [2024-11-25 12:22:18.092295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:24:17.263 [2024-11-25 12:22:18.092306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.117863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.118045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:17.263 [2024-11-25 12:22:18.118064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.493 ms 00:24:17.263 [2024-11-25 12:22:18.118072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.118156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.118169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:17.263 [2024-11-25 12:22:18.118177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:17.263 [2024-11-25 
12:22:18.118184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.155973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.156139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:17.263 [2024-11-25 12:22:18.156165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.734 ms 00:24:17.263 [2024-11-25 12:22:18.156183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.156231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.156240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:17.263 [2024-11-25 12:22:18.156249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:17.263 [2024-11-25 12:22:18.156256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.156596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.156611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:17.263 [2024-11-25 12:22:18.156620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:24:17.263 [2024-11-25 12:22:18.156628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.156748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.156756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:17.263 [2024-11-25 12:22:18.156764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:17.263 [2024-11-25 12:22:18.156771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.169707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.169835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:17.263 [2024-11-25 12:22:18.169904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.916 ms 00:24:17.263 [2024-11-25 12:22:18.169942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.182161] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:17.263 [2024-11-25 12:22:18.182308] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:17.263 [2024-11-25 12:22:18.182461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.182486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:17.263 [2024-11-25 12:22:18.182548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.352 ms 00:24:17.263 [2024-11-25 12:22:18.182615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.206820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.206979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:17.263 [2024-11-25 12:22:18.207060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.148 ms 00:24:17.263 [2024-11-25 12:22:18.207089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:17.263 [2024-11-25 12:22:18.218510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.218638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:17.263 [2024-11-25 12:22:18.218705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.365 ms 00:24:17.263 [2024-11-25 12:22:18.218764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.230122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.263 [2024-11-25 12:22:18.230242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:17.263 [2024-11-25 12:22:18.230306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.290 ms 00:24:17.263 [2024-11-25 12:22:18.230334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.263 [2024-11-25 12:22:18.230975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.264 [2024-11-25 12:22:18.231068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:17.264 [2024-11-25 12:22:18.231130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:24:17.264 [2024-11-25 12:22:18.231156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.264 [2024-11-25 12:22:18.286144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.264 [2024-11-25 12:22:18.286361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:17.264 [2024-11-25 12:22:18.286435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.954 ms 00:24:17.264 [2024-11-25 12:22:18.286465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.264 [2024-11-25 12:22:18.297143] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:17.264 [2024-11-25 12:22:18.299849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.264 [2024-11-25 12:22:18.299975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:17.264 [2024-11-25 12:22:18.300038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.325 ms 00:24:17.264 [2024-11-25 12:22:18.300069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.264 [2024-11-25 12:22:18.300192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.264 [2024-11-25 12:22:18.300227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:17.264 [2024-11-25 12:22:18.300248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:17.264 [2024-11-25 12:22:18.300307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.264 [2024-11-25 12:22:18.300416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.264 [2024-11-25 12:22:18.300521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:17.264 [2024-11-25 12:22:18.300547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:17.264 [2024-11-25 12:22:18.300566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.264 [2024-11-25 12:22:18.300601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.264 [2024-11-25 12:22:18.300627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:24:17.264 [2024-11-25 12:22:18.300734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:24:17.264 [2024-11-25 12:22:18.300759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:17.264 [2024-11-25 12:22:18.300804] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:17.264 [2024-11-25 12:22:18.300871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.264 [2024-11-25 12:22:18.300899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:24:17.264 [2024-11-25 12:22:18.300918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms
00:24:17.264 [2024-11-25 12:22:18.300937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:17.264 [2024-11-25 12:22:18.323958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.264 [2024-11-25 12:22:18.324100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:17.264 [2024-11-25 12:22:18.324172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.934 ms
00:24:17.264 [2024-11-25 12:22:18.324201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:17.264 [2024-11-25 12:22:18.324283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.264 [2024-11-25 12:22:18.324413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:17.264 [2024-11-25 12:22:18.324440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:24:17.264 [2024-11-25 12:22:18.324459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:17.264 [2024-11-25 12:22:18.325413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 256.265 ms, result 0
00:24:18.641  [2024-11-25T12:22:20.655Z] Copying: 43/1024 [MB] (43 MBps) [... 22 intermediate copy-progress updates condensed; per-step rate ranged 29-48 MBps ...] [2024-11-25T12:22:42.189Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-25 12:22:42.051745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.109 [2024-11-25 12:22:42.051800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:24:41.109 [2024-11-25 12:22:42.051814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:41.109 [2024-11-25 12:22:42.051823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.109 [2024-11-25 12:22:42.056388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:41.109 [2024-11-25 12:22:42.060031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.109 [2024-11-25 12:22:42.060069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:41.109 [2024-11-25 12:22:42.060083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.592 ms 00:24:41.109 [2024-11-25 12:22:42.060093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.109 [2024-11-25 12:22:42.072974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.109 [2024-11-25 12:22:42.073011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:41.109 [2024-11-25 12:22:42.073024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.935 ms 00:24:41.109 [2024-11-25 12:22:42.073032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.109 [2024-11-25 12:22:42.095304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.109 [2024-11-25 12:22:42.095346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:41.109 [2024-11-25 12:22:42.095358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.255 ms 00:24:41.109 [2024-11-25 12:22:42.095366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.109 [2024-11-25 12:22:42.101570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.109 [2024-11-25 12:22:42.101617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:41.109 [2024-11-25 12:22:42.101627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.175 ms 00:24:41.109 [2024-11-25 12:22:42.101635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.109 [2024-11-25 12:22:42.125319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.109 [2024-11-25 12:22:42.125470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:41.109 [2024-11-25 12:22:42.125486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.647 ms 00:24:41.109 [2024-11-25 12:22:42.125494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.109 [2024-11-25 12:22:42.138329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.109 [2024-11-25 12:22:42.138359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:41.109 [2024-11-25 12:22:42.138371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.805 ms 00:24:41.109 [2024-11-25 12:22:42.138379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.369 [2024-11-25 12:22:42.190808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.369 [2024-11-25 12:22:42.190863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:41.370 [2024-11-25 12:22:42.190876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.390 ms 00:24:41.370 [2024-11-25 12:22:42.190893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:41.370 [2024-11-25 12:22:42.214672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-11-25 12:22:42.214713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:41.370 [2024-11-25 12:22:42.214724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.763 ms 00:24:41.370 [2024-11-25 12:22:42.214732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.370 [2024-11-25 12:22:42.237548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-11-25 12:22:42.237587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:41.370 [2024-11-25 12:22:42.237598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.779 ms 00:24:41.370 [2024-11-25 12:22:42.237606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.370 [2024-11-25 12:22:42.260461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-11-25 12:22:42.260606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:41.370 [2024-11-25 12:22:42.260624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.821 ms 00:24:41.370 [2024-11-25 12:22:42.260632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.370 [2024-11-25 12:22:42.283506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.370 [2024-11-25 12:22:42.283635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:41.370 [2024-11-25 12:22:42.283652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.818 ms 00:24:41.370 [2024-11-25 12:22:42.283659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.370 [2024-11-25 12:22:42.283689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:41.370 [2024-11-25 12:22:42.283703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:24:41.370 [2024-11-25 12:22:42.283713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283788] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 
12:22:42.283992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.283999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:24:41.370 [2024-11-25 12:22:42.284180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:41.370 [2024-11-25 12:22:42.284271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:41.371 [2024-11-25 12:22:42.284484] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:41.371 [2024-11-25 12:22:42.284492] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aed19bdd-3515-46f8-96d0-b2af87e28583 00:24:41.371 [2024-11-25 12:22:42.284500] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:24:41.371 [2024-11-25 12:22:42.284511] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129472 00:24:41.371 [2024-11-25 12:22:42.284525] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:24:41.371 [2024-11-25 12:22:42.284533] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:24:41.371 [2024-11-25 12:22:42.284540] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:41.371 [2024-11-25 12:22:42.284548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:41.371 [2024-11-25 12:22:42.284555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:41.371 [2024-11-25 12:22:42.284562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:41.371 [2024-11-25 12:22:42.284568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:41.371 [2024-11-25 12:22:42.284575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.371 [2024-11-25 12:22:42.284582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:41.371 [2024-11-25 
12:22:42.284590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:24:41.371 [2024-11-25 12:22:42.284596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.371 [2024-11-25 12:22:42.297648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.371 [2024-11-25 12:22:42.297764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:41.371 [2024-11-25 12:22:42.297826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.034 ms 00:24:41.371 [2024-11-25 12:22:42.297855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.371 [2024-11-25 12:22:42.298289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.371 [2024-11-25 12:22:42.298379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:41.371 [2024-11-25 12:22:42.298440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:24:41.371 [2024-11-25 12:22:42.298497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.371 [2024-11-25 12:22:42.331009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.371 [2024-11-25 12:22:42.331185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:41.371 [2024-11-25 12:22:42.331309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.371 [2024-11-25 12:22:42.331337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.371 [2024-11-25 12:22:42.331417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.371 [2024-11-25 12:22:42.331445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:41.371 [2024-11-25 12:22:42.331464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.371 [2024-11-25 12:22:42.331482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.371 [2024-11-25 12:22:42.331589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.371 [2024-11-25 12:22:42.331752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:41.371 [2024-11-25 12:22:42.331779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.371 [2024-11-25 12:22:42.331798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.371 [2024-11-25 12:22:42.331828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.371 [2024-11-25 12:22:42.331848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:41.371 [2024-11-25 12:22:42.331868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.371 [2024-11-25 12:22:42.331886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.371 [2024-11-25 12:22:42.407454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.371 [2024-11-25 12:22:42.407629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:41.371 [2024-11-25 12:22:42.407710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.371 [2024-11-25 12:22:42.407741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-11-25 12:22:42.469751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.632 [2024-11-25 12:22:42.469918] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:41.632 [2024-11-25 12:22:42.470009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.632 [2024-11-25 12:22:42.470037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-11-25 12:22:42.470119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.632 [2024-11-25 12:22:42.470143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:41.632 [2024-11-25 12:22:42.470162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.632 [2024-11-25 12:22:42.470229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-11-25 12:22:42.470306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.632 [2024-11-25 12:22:42.470330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:41.632 [2024-11-25 12:22:42.470349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.632 [2024-11-25 12:22:42.470368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-11-25 12:22:42.470471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.632 [2024-11-25 12:22:42.470537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:41.632 [2024-11-25 12:22:42.470575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.632 [2024-11-25 12:22:42.470626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-11-25 12:22:42.470689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.632 [2024-11-25 12:22:42.470713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:41.632 [2024-11-25 12:22:42.470770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.632 [2024-11-25 12:22:42.470803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-11-25 12:22:42.470853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.632 [2024-11-25 12:22:42.470880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:41.632 [2024-11-25 12:22:42.470899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.632 [2024-11-25 12:22:42.470979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-11-25 12:22:42.471041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.632 [2024-11-25 12:22:42.471065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:41.632 [2024-11-25 12:22:42.471084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.632 [2024-11-25 12:22:42.471103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.632 [2024-11-25 12:22:42.471229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.902 ms, result 0 00:24:43.583 00:24:43.584 00:24:43.584 12:22:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:45.499 12:22:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:45.499 [2024-11-25 12:22:46.521837] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:24:45.499 [2024-11-25 12:22:46.522133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79550 ] 00:24:45.757 [2024-11-25 12:22:46.683973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.757 [2024-11-25 12:22:46.781529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.015 [2024-11-25 12:22:47.032708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.015 [2024-11-25 12:22:47.032769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.275 [2024-11-25 12:22:47.185532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.185719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:46.275 [2024-11-25 12:22:47.185747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:46.275 [2024-11-25 12:22:47.185756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.185809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.185820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.275 [2024-11-25 12:22:47.185830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:46.275 [2024-11-25 12:22:47.185837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.185857] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:46.275 [2024-11-25 12:22:47.186571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:46.275 [2024-11-25 12:22:47.186593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.186601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.275 [2024-11-25 12:22:47.186609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:24:46.275 [2024-11-25 12:22:47.186617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.187654] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:46.275 [2024-11-25 12:22:47.199513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.199545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:46.275 [2024-11-25 12:22:47.199558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.860 ms 00:24:46.275 [2024-11-25 12:22:47.199567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.199621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.199630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:46.275 [2024-11-25 12:22:47.199638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:46.275 [2024-11-25 
12:22:47.199646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.204459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.204592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.275 [2024-11-25 12:22:47.204608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.755 ms 00:24:46.275 [2024-11-25 12:22:47.204616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.204693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.204702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.275 [2024-11-25 12:22:47.204711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:46.275 [2024-11-25 12:22:47.204718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.204758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.204767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:46.275 [2024-11-25 12:22:47.204775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:46.275 [2024-11-25 12:22:47.204782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.204803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:46.275 [2024-11-25 12:22:47.208116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.208142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.275 [2024-11-25 12:22:47.208151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.318 ms 00:24:46.275 [2024-11-25 12:22:47.208160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.208188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.275 [2024-11-25 12:22:47.208196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:46.275 [2024-11-25 12:22:47.208204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:46.275 [2024-11-25 12:22:47.208211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.275 [2024-11-25 12:22:47.208230] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:46.275 [2024-11-25 12:22:47.208247] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:46.275 [2024-11-25 12:22:47.208280] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:46.276 [2024-11-25 12:22:47.208297] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:46.276 [2024-11-25 12:22:47.208398] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:46.276 [2024-11-25 12:22:47.208408] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:46.276 [2024-11-25 12:22:47.208419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:46.276 
[2024-11-25 12:22:47.208428] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208437] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208445] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:46.276 [2024-11-25 12:22:47.208452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:46.276 [2024-11-25 12:22:47.208459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:46.276 [2024-11-25 12:22:47.208465] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:46.276 [2024-11-25 12:22:47.208475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.276 [2024-11-25 12:22:47.208482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:46.276 [2024-11-25 12:22:47.208490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:24:46.276 [2024-11-25 12:22:47.208496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-11-25 12:22:47.208577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.276 [2024-11-25 12:22:47.208585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:46.276 [2024-11-25 12:22:47.208592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:46.276 [2024-11-25 12:22:47.208599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-11-25 12:22:47.208698] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:46.276 [2024-11-25 12:22:47.208709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:46.276 [2024-11-25 12:22:47.208716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:46.276 [2024-11-25 12:22:47.208738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:46.276 [2024-11-25 12:22:47.208759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.276 [2024-11-25 12:22:47.208773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:46.276 [2024-11-25 12:22:47.208779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:46.276 [2024-11-25 12:22:47.208786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.276 [2024-11-25 12:22:47.208793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:46.276 [2024-11-25 12:22:47.208799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:46.276 [2024-11-25 12:22:47.208810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:24:46.276 [2024-11-25 12:22:47.208823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:46.276 [2024-11-25 12:22:47.208843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:46.276 [2024-11-25 12:22:47.208866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:46.276 [2024-11-25 12:22:47.208885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:46.276 [2024-11-25 12:22:47.208904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.276 [2024-11-25 12:22:47.208917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:46.276 [2024-11-25 12:22:47.208923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.276 [2024-11-25 12:22:47.208936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:46.276 [2024-11-25 12:22:47.208942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:46.276 [2024-11-25 12:22:47.208967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.276 [2024-11-25 12:22:47.208973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:46.276 [2024-11-25 12:22:47.208980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:46.276 [2024-11-25 12:22:47.208986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.276 [2024-11-25 12:22:47.208993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:46.276 [2024-11-25 12:22:47.209000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:46.276 [2024-11-25 12:22:47.209007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.276 [2024-11-25 12:22:47.209013] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:46.276 [2024-11-25 12:22:47.209021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:46.276 [2024-11-25 12:22:47.209028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.276 [2024-11-25 12:22:47.209035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.276 [2024-11-25 12:22:47.209042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:46.276 [2024-11-25 12:22:47.209049] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:46.276 [2024-11-25 12:22:47.209055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:46.276 [2024-11-25 12:22:47.209062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:46.276 [2024-11-25 12:22:47.209069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:46.276 [2024-11-25 12:22:47.209076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:46.276 [2024-11-25 12:22:47.209085] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:46.276 [2024-11-25 12:22:47.209094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.276 [2024-11-25 12:22:47.209102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:46.276 [2024-11-25 12:22:47.209109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:46.276 [2024-11-25 12:22:47.209116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:46.276 [2024-11-25 12:22:47.209123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:46.276 [2024-11-25 12:22:47.209130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:46.276 [2024-11-25 12:22:47.209136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:46.276 [2024-11-25 12:22:47.209143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:46.276 [2024-11-25 12:22:47.209150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:46.276 [2024-11-25 12:22:47.209157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:46.276 [2024-11-25 12:22:47.209164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:46.276 [2024-11-25 12:22:47.209171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:46.276 [2024-11-25 12:22:47.209178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:46.276 [2024-11-25 12:22:47.209184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:46.276 [2024-11-25 12:22:47.209191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:46.276 [2024-11-25 12:22:47.209198] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:46.276 [2024-11-25 12:22:47.209208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:24:46.276 [2024-11-25 12:22:47.209216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:46.276 [2024-11-25 12:22:47.209223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:46.276 [2024-11-25 12:22:47.209230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:46.276 [2024-11-25 12:22:47.209237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:46.276 [2024-11-25 12:22:47.209244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.276 [2024-11-25 12:22:47.209251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:46.276 [2024-11-25 12:22:47.209258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:24:46.276 [2024-11-25 12:22:47.209265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.234807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.234943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.277 [2024-11-25 12:22:47.234972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.489 ms 00:24:46.277 [2024-11-25 12:22:47.234980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.235066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.235074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.277 [2024-11-25 12:22:47.235084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:46.277 [2024-11-25 12:22:47.235091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.279728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.279768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.277 [2024-11-25 12:22:47.279781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.588 ms 00:24:46.277 [2024-11-25 12:22:47.279789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.279831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.279841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.277 [2024-11-25 12:22:47.279849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:46.277 [2024-11-25 12:22:47.279859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.280241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.280382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.277 [2024-11-25 12:22:47.280404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:24:46.277 [2024-11-25 12:22:47.280413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.280536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:46.277 [2024-11-25 12:22:47.280544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.277 [2024-11-25 12:22:47.280552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:46.277 [2024-11-25 12:22:47.280564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.293514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.293547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.277 [2024-11-25 12:22:47.293560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.930 ms 00:24:46.277 [2024-11-25 12:22:47.293568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.305703] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:46.277 [2024-11-25 12:22:47.305734] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:46.277 [2024-11-25 12:22:47.305744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.305752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:46.277 [2024-11-25 12:22:47.305761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.087 ms 00:24:46.277 [2024-11-25 12:22:47.305768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.329782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.329816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:46.277 [2024-11-25 12:22:47.329826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.978 ms 00:24:46.277 [2024-11-25 12:22:47.329834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.277 [2024-11-25 12:22:47.341121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.277 [2024-11-25 12:22:47.341156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:46.277 [2024-11-25 12:22:47.341166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.254 ms 00:24:46.277 [2024-11-25 12:22:47.341173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.352322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.352352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:46.535 [2024-11-25 12:22:47.352364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.119 ms 00:24:46.535 [2024-11-25 12:22:47.352373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.353019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.353047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.535 [2024-11-25 12:22:47.353056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:24:46.535 [2024-11-25 12:22:47.353067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.407184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.407364] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:46.535 [2024-11-25 12:22:47.407393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.098 ms 00:24:46.535 [2024-11-25 12:22:47.407401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.417994] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:46.535 [2024-11-25 12:22:47.420454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.420486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.535 [2024-11-25 12:22:47.420499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.698 ms 00:24:46.535 [2024-11-25 12:22:47.420507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.420605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.420617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:46.535 [2024-11-25 12:22:47.420625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:46.535 [2024-11-25 12:22:47.420635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.422108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.422138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.535 [2024-11-25 12:22:47.422149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:24:46.535 [2024-11-25 12:22:47.422156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.422180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.422188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.535 [2024-11-25 12:22:47.422196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:46.535 [2024-11-25 12:22:47.422204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.422236] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:46.535 [2024-11-25 12:22:47.422247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.422256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:46.535 [2024-11-25 12:22:47.422264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:46.535 [2024-11-25 12:22:47.422272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.535 [2024-11-25 12:22:47.445166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.535 [2024-11-25 12:22:47.445299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.535 [2024-11-25 12:22:47.445318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.878 ms 00:24:46.536 [2024-11-25 12:22:47.445331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.536 [2024-11-25 12:22:47.445396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.536 [2024-11-25 12:22:47.445405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.536 [2024-11-25 12:22:47.445413] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:46.536 [2024-11-25 12:22:47.445421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.536 [2024-11-25 12:22:47.446325] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.352 ms, result 0 00:24:47.907  [2024-11-25T12:22:49.965Z] Copying: 960/1048576 [kB] (960 kBps) [2024-11-25T12:22:50.898Z] Copying: 11/1024 [MB] (10 MBps) [2024-11-25T12:22:51.832Z] Copying: 64/1024 [MB] (53 MBps) [2024-11-25T12:22:52.766Z] Copying: 118/1024 [MB] (53 MBps) [2024-11-25T12:22:53.699Z] Copying: 173/1024 [MB] (55 MBps) [2024-11-25T12:22:54.633Z] Copying: 232/1024 [MB] (59 MBps) [2024-11-25T12:22:56.019Z] Copying: 286/1024 [MB] (53 MBps) [2024-11-25T12:22:56.985Z] Copying: 339/1024 [MB] (53 MBps) [2024-11-25T12:22:57.940Z] Copying: 393/1024 [MB] (53 MBps) [2024-11-25T12:22:58.873Z] Copying: 447/1024 [MB] (54 MBps) [2024-11-25T12:22:59.805Z] Copying: 501/1024 [MB] (53 MBps) [2024-11-25T12:23:00.738Z] Copying: 556/1024 [MB] (55 MBps) [2024-11-25T12:23:01.673Z] Copying: 607/1024 [MB] (51 MBps) [2024-11-25T12:23:03.044Z] Copying: 662/1024 [MB] (55 MBps) [2024-11-25T12:23:03.667Z] Copying: 716/1024 [MB] (53 MBps) [2024-11-25T12:23:05.041Z] Copying: 768/1024 [MB] (52 MBps) [2024-11-25T12:23:05.975Z] Copying: 819/1024 [MB] (50 MBps) [2024-11-25T12:23:06.908Z] Copying: 873/1024 [MB] (54 MBps) [2024-11-25T12:23:07.839Z] Copying: 927/1024 [MB] (53 MBps) [2024-11-25T12:23:08.774Z] Copying: 981/1024 [MB] (54 MBps) [2024-11-25T12:23:08.774Z] Copying: 1024/1024 [MB] (average 49 MBps)[2024-11-25 12:23:08.693350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.694 [2024-11-25 12:23:08.693423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:07.694 [2024-11-25 12:23:08.693453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:07.695 [2024-11-25 12:23:08.693465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.695 [2024-11-25 12:23:08.693494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:07.695 [2024-11-25 12:23:08.703436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.695 [2024-11-25 12:23:08.703507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:07.695 [2024-11-25 12:23:08.703535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.918 ms 00:25:07.695 [2024-11-25 12:23:08.703555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.695 [2024-11-25 12:23:08.704189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.695 [2024-11-25 12:23:08.704229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:07.695 [2024-11-25 12:23:08.704260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:25:07.695 [2024-11-25 12:23:08.704280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.695 [2024-11-25 12:23:08.714475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.695 [2024-11-25 12:23:08.714609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:07.695 [2024-11-25 12:23:08.714630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.161 ms 00:25:07.695 [2024-11-25 12:23:08.714638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:25:07.695 [2024-11-25 12:23:08.720850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.695 [2024-11-25 12:23:08.720876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:07.695 [2024-11-25 12:23:08.720885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:25:07.695 [2024-11-25 12:23:08.720897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.695 [2024-11-25 12:23:08.744232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.695 [2024-11-25 12:23:08.744399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:07.695 [2024-11-25 12:23:08.744415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.300 ms 00:25:07.695 [2024-11-25 12:23:08.744423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.695 [2024-11-25 12:23:08.758272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.695 [2024-11-25 12:23:08.758302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:07.695 [2024-11-25 12:23:08.758313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.821 ms 00:25:07.695 [2024-11-25 12:23:08.758322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.695 [2024-11-25 12:23:08.760178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.695 [2024-11-25 12:23:08.760205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:07.695 [2024-11-25 12:23:08.760214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:25:07.695 [2024-11-25 12:23:08.760222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.955 [2024-11-25 12:23:08.783305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.955 [2024-11-25 12:23:08.783420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:07.955 [2024-11-25 12:23:08.783435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.065 ms 00:25:07.955 [2024-11-25 12:23:08.783442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.955 [2024-11-25 12:23:08.805537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.955 [2024-11-25 12:23:08.805565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:07.955 [2024-11-25 12:23:08.805582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.069 ms 00:25:07.955 [2024-11-25 12:23:08.805588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.955 [2024-11-25 12:23:08.827852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.955 [2024-11-25 12:23:08.827893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:07.955 [2024-11-25 12:23:08.827903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.232 ms 00:25:07.955 [2024-11-25 12:23:08.827912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.955 [2024-11-25 12:23:08.849850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.955 [2024-11-25 12:23:08.849980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:07.955 [2024-11-25 12:23:08.849995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.875 ms 00:25:07.955 [2024-11-25 
12:23:08.850002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.955 [2024-11-25 12:23:08.850028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:07.955 [2024-11-25 12:23:08.850041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:07.955 [2024-11-25 12:23:08.850052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:07.955 [2024-11-25 12:23:08.850061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850215] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:07.955 [2024-11-25 12:23:08.850319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 
12:23:08.850398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:25:07.956 [2024-11-25 12:23:08.850579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:07.956 [2024-11-25 12:23:08.850781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:07.956 [2024-11-25 12:23:08.850788] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aed19bdd-3515-46f8-96d0-b2af87e28583 00:25:07.956 [2024-11-25 12:23:08.850796] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:07.956 [2024-11-25 12:23:08.850802] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136128 00:25:07.956 [2024-11-25 12:23:08.850809] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134144 00:25:07.956 [2024-11-25 12:23:08.850820] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:25:07.956 [2024-11-25 12:23:08.850827] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:07.956 [2024-11-25 12:23:08.850834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:07.956 [2024-11-25 12:23:08.850841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:07.956 [2024-11-25 12:23:08.850853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:07.956 [2024-11-25 12:23:08.850859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:07.956 [2024-11-25 12:23:08.850866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.956 [2024-11-25 12:23:08.850874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:07.956 [2024-11-25 12:23:08.850881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:25:07.956 [2024-11-25 12:23:08.850888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.957 [2024-11-25 12:23:08.863110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.957 [2024-11-25 12:23:08.863140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:07.957 [2024-11-25 12:23:08.863150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.207 ms 00:25:07.957 [2024-11-25 12:23:08.863157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.957 [2024-11-25 12:23:08.863484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.957 [2024-11-25 12:23:08.863496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:07.957 [2024-11-25 12:23:08.863504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:25:07.957 [2024-11-25 12:23:08.863510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.957 [2024-11-25 12:23:08.895825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.957 [2024-11-25 12:23:08.895857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:07.957 [2024-11-25 12:23:08.895866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.957 [2024-11-25 12:23:08.895874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.957 [2024-11-25 12:23:08.895927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.957 [2024-11-25 12:23:08.895935] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:07.957 [2024-11-25 12:23:08.895943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.957 [2024-11-25 12:23:08.895965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.957 [2024-11-25 12:23:08.896013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.957 [2024-11-25 12:23:08.896026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:07.957 [2024-11-25 12:23:08.896034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.957 [2024-11-25 12:23:08.896041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.957 [2024-11-25 12:23:08.896055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.957 [2024-11-25 12:23:08.896062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:07.957 [2024-11-25 12:23:08.896070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.957 [2024-11-25 12:23:08.896077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.957 [2024-11-25 12:23:08.972443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.957 [2024-11-25 12:23:08.972486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:07.957 [2024-11-25 12:23:08.972497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.957 [2024-11-25 12:23:08.972504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.216 [2024-11-25 12:23:09.035161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.216 [2024-11-25 12:23:09.035205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.216 [2024-11-25 12:23:09.035216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.216 [2024-11-25 12:23:09.035224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.216 [2024-11-25 12:23:09.035286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.216 [2024-11-25 12:23:09.035295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.216 [2024-11-25 12:23:09.035308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.216 [2024-11-25 12:23:09.035315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.216 [2024-11-25 12:23:09.035347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.216 [2024-11-25 12:23:09.035355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.216 [2024-11-25 12:23:09.035363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.216 [2024-11-25 12:23:09.035370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.216 [2024-11-25 12:23:09.035454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.216 [2024-11-25 12:23:09.035464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.216 [2024-11-25 12:23:09.035472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.216 [2024-11-25 12:23:09.035481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.216 [2024-11-25 12:23:09.035508] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.216 [2024-11-25 12:23:09.035517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:08.216 [2024-11-25 12:23:09.035525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.216 [2024-11-25 12:23:09.035532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.216 [2024-11-25 12:23:09.035563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.216 [2024-11-25 12:23:09.035571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.216 [2024-11-25 12:23:09.035579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.216 [2024-11-25 12:23:09.035589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.216 [2024-11-25 12:23:09.035624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.216 [2024-11-25 12:23:09.035633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.216 [2024-11-25 12:23:09.035641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.216 [2024-11-25 12:23:09.035648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.216 [2024-11-25 12:23:09.035751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.382 ms, result 0 00:25:10.779 00:25:10.779 00:25:10.779 12:23:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:12.708 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:12.708 12:23:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:12.709 [2024-11-25 12:23:13.555352] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
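A few of the figures logged above can be sanity-checked with quick arithmetic: the WAF in the ftl_dev_dump_stats records is total writes divided by user writes, the "total valid LBAs" line is the sum of the per-band valid counts (Band 1 full, Band 2 partially written), and the spdk_dd --count/--skip arguments, given in FTL logical blocks, pin down how much of the test file the read-back covers. The sketch below (Python) redoes that arithmetic; the 4 KiB logical block size is an assumption, consistent with the 1024 MiB total shown by the copy progress, while every other number is quoted from the log.

    # Sanity checks on numbers quoted from the log above (a sketch; the
    # 4 KiB FTL logical block size is an assumption, the rest is quoted).

    # 1) Write amplification factor from the ftl_dev_dump_stats records.
    total_writes = 136128                 # "total writes"
    user_writes = 134144                  # "user writes"
    print(f"WAF: {total_writes / user_writes:.4f}")   # -> WAF: 1.0148, as reported

    # 2) Valid LBAs: Band 1 is full (261120), Band 2 holds the remainder.
    print(261120 + 1536)                  # -> 262656, the "total valid LBAs" line

    # 3) spdk_dd read-back geometry: --count and --skip are in logical blocks.
    BLOCK_SIZE = 4096                     # assumed 4 KiB logical block
    count = skip = 262144
    print(count * BLOCK_SIZE // 2**20)    # -> 1024 MiB copied ("1024/1024 [MB]")
    print(skip * BLOCK_SIZE // 2**30)     # -> offset 1 GiB: the file's second half

The --skip=262144 therefore targets exactly the half of the file not covered by the md5sum -c check above, which is the point of the dirty-shutdown test: verify the first half against a stored checksum, then read the second half back out of the restored FTL device.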
00:25:12.709 [2024-11-25 12:23:13.555474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79816 ] 00:25:12.709 [2024-11-25 12:23:13.715750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.966 [2024-11-25 12:23:13.814150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.224 [2024-11-25 12:23:14.091153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.224 [2024-11-25 12:23:14.091234] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.224 [2024-11-25 12:23:14.246682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.224 [2024-11-25 12:23:14.246740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:13.224 [2024-11-25 12:23:14.246758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:13.224 [2024-11-25 12:23:14.246766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.246814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.246823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.225 [2024-11-25 12:23:14.246834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:13.225 [2024-11-25 12:23:14.246841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.246860] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:13.225 [2024-11-25 12:23:14.247552] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:13.225 [2024-11-25 12:23:14.247573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.247581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.225 [2024-11-25 12:23:14.247589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:25:13.225 [2024-11-25 12:23:14.247597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.248725] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:13.225 [2024-11-25 12:23:14.260648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.260683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:13.225 [2024-11-25 12:23:14.260695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.924 ms 00:25:13.225 [2024-11-25 12:23:14.260703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.260759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.260769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:13.225 [2024-11-25 12:23:14.260777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:13.225 [2024-11-25 12:23:14.260784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.265763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:13.225 [2024-11-25 12:23:14.265793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.225 [2024-11-25 12:23:14.265802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.920 ms 00:25:13.225 [2024-11-25 12:23:14.265810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.265886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.265896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.225 [2024-11-25 12:23:14.265904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:13.225 [2024-11-25 12:23:14.265911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.265960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.265970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:13.225 [2024-11-25 12:23:14.265978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:13.225 [2024-11-25 12:23:14.265985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.266006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:13.225 [2024-11-25 12:23:14.269367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.269393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.225 [2024-11-25 12:23:14.269402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms 00:25:13.225 [2024-11-25 12:23:14.269412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.269438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.269447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:13.225 [2024-11-25 12:23:14.269455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:13.225 [2024-11-25 12:23:14.269462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.269482] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:13.225 [2024-11-25 12:23:14.269499] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:13.225 [2024-11-25 12:23:14.269542] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:13.225 [2024-11-25 12:23:14.269559] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:13.225 [2024-11-25 12:23:14.269660] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:13.225 [2024-11-25 12:23:14.269670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:13.225 [2024-11-25 12:23:14.269680] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:13.225 [2024-11-25 12:23:14.269690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:13.225 [2024-11-25 12:23:14.269698] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:13.225 [2024-11-25 12:23:14.269707] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:13.225 [2024-11-25 12:23:14.269714] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:13.225 [2024-11-25 12:23:14.269721] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:13.225 [2024-11-25 12:23:14.269728] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:13.225 [2024-11-25 12:23:14.269739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.269746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:13.225 [2024-11-25 12:23:14.269754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:25:13.225 [2024-11-25 12:23:14.269761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.269844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.225 [2024-11-25 12:23:14.269851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:13.225 [2024-11-25 12:23:14.269858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:13.225 [2024-11-25 12:23:14.269865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.225 [2024-11-25 12:23:14.269995] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:13.225 [2024-11-25 12:23:14.270009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:13.225 [2024-11-25 12:23:14.270017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.225 [2024-11-25 12:23:14.270024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.225 [2024-11-25 12:23:14.270031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:13.225 [2024-11-25 12:23:14.270038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:13.225 [2024-11-25 12:23:14.270045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:13.225 [2024-11-25 12:23:14.270052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:13.225 [2024-11-25 12:23:14.270059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:13.225 [2024-11-25 12:23:14.270066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.225 [2024-11-25 12:23:14.270073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:13.225 [2024-11-25 12:23:14.270079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:13.225 [2024-11-25 12:23:14.270085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.225 [2024-11-25 12:23:14.270092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:13.225 [2024-11-25 12:23:14.270099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:13.225 [2024-11-25 12:23:14.270111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.225 [2024-11-25 12:23:14.270117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:13.225 [2024-11-25 12:23:14.270124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:13.225 [2024-11-25 12:23:14.270130] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.225 [2024-11-25 12:23:14.270136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:13.225 [2024-11-25 12:23:14.270143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:13.225 [2024-11-25 12:23:14.270149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.225 [2024-11-25 12:23:14.270155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:13.225 [2024-11-25 12:23:14.270162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:13.225 [2024-11-25 12:23:14.270168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.225 [2024-11-25 12:23:14.270174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:13.225 [2024-11-25 12:23:14.270181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:13.225 [2024-11-25 12:23:14.270187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.225 [2024-11-25 12:23:14.270193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:13.226 [2024-11-25 12:23:14.270200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:13.226 [2024-11-25 12:23:14.270208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.226 [2024-11-25 12:23:14.270214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:13.226 [2024-11-25 12:23:14.270221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:13.226 [2024-11-25 12:23:14.270227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.226 [2024-11-25 12:23:14.270234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:13.226 [2024-11-25 12:23:14.270240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:13.226 [2024-11-25 12:23:14.270246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.226 [2024-11-25 12:23:14.270252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:13.226 [2024-11-25 12:23:14.270259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:13.226 [2024-11-25 12:23:14.270265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.226 [2024-11-25 12:23:14.270271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:13.226 [2024-11-25 12:23:14.270278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:13.226 [2024-11-25 12:23:14.270285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.226 [2024-11-25 12:23:14.270291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:13.226 [2024-11-25 12:23:14.270298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:13.226 [2024-11-25 12:23:14.270307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.226 [2024-11-25 12:23:14.270314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.226 [2024-11-25 12:23:14.270321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:13.226 [2024-11-25 12:23:14.270328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:13.226 [2024-11-25 12:23:14.270334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:13.226 
[2024-11-25 12:23:14.270341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:13.226 [2024-11-25 12:23:14.270347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:13.226 [2024-11-25 12:23:14.270354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:13.226 [2024-11-25 12:23:14.270361] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:13.226 [2024-11-25 12:23:14.270370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.226 [2024-11-25 12:23:14.270378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:13.226 [2024-11-25 12:23:14.270385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:13.226 [2024-11-25 12:23:14.270392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:13.226 [2024-11-25 12:23:14.270399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:13.226 [2024-11-25 12:23:14.270406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:13.226 [2024-11-25 12:23:14.270412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:13.226 [2024-11-25 12:23:14.270419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:13.226 [2024-11-25 12:23:14.270426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:13.226 [2024-11-25 12:23:14.270433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:13.226 [2024-11-25 12:23:14.270440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:13.226 [2024-11-25 12:23:14.270447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:13.226 [2024-11-25 12:23:14.270454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:13.226 [2024-11-25 12:23:14.270460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:13.226 [2024-11-25 12:23:14.270468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:13.226 [2024-11-25 12:23:14.270474] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:13.226 [2024-11-25 12:23:14.270484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.226 [2024-11-25 12:23:14.270492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:13.226 [2024-11-25 12:23:14.270500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:13.226 [2024-11-25 12:23:14.270508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:13.226 [2024-11-25 12:23:14.270515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:13.226 [2024-11-25 12:23:14.270522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.226 [2024-11-25 12:23:14.270529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:13.226 [2024-11-25 12:23:14.270537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:25:13.226 [2024-11-25 12:23:14.270544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.226 [2024-11-25 12:23:14.296265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.226 [2024-11-25 12:23:14.296433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.226 [2024-11-25 12:23:14.296451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.680 ms 00:25:13.226 [2024-11-25 12:23:14.296459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.226 [2024-11-25 12:23:14.296549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.226 [2024-11-25 12:23:14.296557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:13.226 [2024-11-25 12:23:14.296565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:13.226 [2024-11-25 12:23:14.296572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.484 [2024-11-25 12:23:14.340284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.484 [2024-11-25 12:23:14.340438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.484 [2024-11-25 12:23:14.340456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.657 ms 00:25:13.484 [2024-11-25 12:23:14.340465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.484 [2024-11-25 12:23:14.340515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.484 [2024-11-25 12:23:14.340524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.484 [2024-11-25 12:23:14.340532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:13.484 [2024-11-25 12:23:14.340544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.484 [2024-11-25 12:23:14.340903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.484 [2024-11-25 12:23:14.340920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.484 [2024-11-25 12:23:14.340929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:13.484 [2024-11-25 12:23:14.340937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.484 [2024-11-25 12:23:14.341089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.484 [2024-11-25 12:23:14.341101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.484 [2024-11-25 12:23:14.341109] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:13.484 [2024-11-25 12:23:14.341121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.484 [2024-11-25 12:23:14.354041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.354072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:13.485 [2024-11-25 12:23:14.354085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.903 ms 00:25:13.485 [2024-11-25 12:23:14.354092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.366413] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:13.485 [2024-11-25 12:23:14.366448] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:13.485 [2024-11-25 12:23:14.366460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.366468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:13.485 [2024-11-25 12:23:14.366477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.274 ms 00:25:13.485 [2024-11-25 12:23:14.366484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.390499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.390543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:13.485 [2024-11-25 12:23:14.390555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.976 ms 00:25:13.485 [2024-11-25 12:23:14.390564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.402290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.402320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:13.485 [2024-11-25 12:23:14.402330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.678 ms 00:25:13.485 [2024-11-25 12:23:14.402338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.413175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.413204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:13.485 [2024-11-25 12:23:14.413215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.803 ms 00:25:13.485 [2024-11-25 12:23:14.413222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.413837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.413861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:13.485 [2024-11-25 12:23:14.413870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:25:13.485 [2024-11-25 12:23:14.413880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.468552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.468601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:13.485 [2024-11-25 12:23:14.468619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.654 ms 00:25:13.485 [2024-11-25 12:23:14.468627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.479253] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:13.485 [2024-11-25 12:23:14.481993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.482023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:13.485 [2024-11-25 12:23:14.482035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.317 ms 00:25:13.485 [2024-11-25 12:23:14.482045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.482146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.482156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:13.485 [2024-11-25 12:23:14.482165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:13.485 [2024-11-25 12:23:14.482175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.482719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.482831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:13.485 [2024-11-25 12:23:14.482846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:25:13.485 [2024-11-25 12:23:14.482853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.482879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.482886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:13.485 [2024-11-25 12:23:14.482894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:13.485 [2024-11-25 12:23:14.482901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.482935] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:13.485 [2024-11-25 12:23:14.482963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.482971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:13.485 [2024-11-25 12:23:14.482979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:13.485 [2024-11-25 12:23:14.482987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.506028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.506064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:13.485 [2024-11-25 12:23:14.506076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.023 ms 00:25:13.485 [2024-11-25 12:23:14.506088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.485 [2024-11-25 12:23:14.506157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.485 [2024-11-25 12:23:14.506166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:13.485 [2024-11-25 12:23:14.506174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:13.485 [2024-11-25 12:23:14.506181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
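Each management step above is traced as a matching Action/name/duration/status group, and the finish_msg record just below reports the end-to-end total (260.038 ms for this 'FTL startup'). Assuming a console log with one record per line, a minimal parser along the following lines can recompute the step sum for comparison; the "build.log" path is hypothetical.

    import re

    # Minimal sketch: sum the per-step "duration: X ms" values from the
    # trace_step records and compare with the finish_msg total. Assumes one
    # record per line; "build.log" is a hypothetical path to this output.
    step = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms")
    done = re.compile(r"finish_msg: .* name 'FTL startup', duration = ([\d.]+) ms")

    steps_ms = 0.0
    with open("build.log") as log:
        for line in log:
            if (m := step.search(line)):
                steps_ms += float(m.group(1))
            elif (m := done.search(line)):
                # The steps run back-to-back, so their sum should come close
                # to the reported total; scheduling gaps make up the rest.
                print(f"steps: {steps_ms:.3f} ms, reported: {m.group(1)} ms")
                break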
00:25:13.485 [2024-11-25 12:23:14.507144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.038 ms, result 0 00:25:14.858  [2024-11-25T12:23:16.871Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-25T12:23:17.806Z] Copying: 98/1024 [MB] (52 MBps) [2024-11-25T12:23:18.737Z] Copying: 147/1024 [MB] (49 MBps) [2024-11-25T12:23:19.685Z] Copying: 194/1024 [MB] (46 MBps) [2024-11-25T12:23:21.059Z] Copying: 242/1024 [MB] (48 MBps) [2024-11-25T12:23:21.990Z] Copying: 291/1024 [MB] (48 MBps) [2024-11-25T12:23:22.922Z] Copying: 341/1024 [MB] (49 MBps) [2024-11-25T12:23:23.857Z] Copying: 390/1024 [MB] (48 MBps) [2024-11-25T12:23:24.792Z] Copying: 438/1024 [MB] (48 MBps) [2024-11-25T12:23:25.726Z] Copying: 487/1024 [MB] (48 MBps) [2024-11-25T12:23:27.100Z] Copying: 538/1024 [MB] (50 MBps) [2024-11-25T12:23:28.035Z] Copying: 586/1024 [MB] (48 MBps) [2024-11-25T12:23:28.970Z] Copying: 634/1024 [MB] (48 MBps) [2024-11-25T12:23:29.902Z] Copying: 682/1024 [MB] (47 MBps) [2024-11-25T12:23:30.835Z] Copying: 731/1024 [MB] (49 MBps) [2024-11-25T12:23:31.769Z] Copying: 780/1024 [MB] (48 MBps) [2024-11-25T12:23:32.703Z] Copying: 829/1024 [MB] (49 MBps) [2024-11-25T12:23:34.076Z] Copying: 878/1024 [MB] (49 MBps) [2024-11-25T12:23:35.007Z] Copying: 923/1024 [MB] (45 MBps) [2024-11-25T12:23:35.941Z] Copying: 971/1024 [MB] (47 MBps) [2024-11-25T12:23:35.941Z] Copying: 1020/1024 [MB] (49 MBps) [2024-11-25T12:23:35.941Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-11-25 12:23:35.851035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.861 [2024-11-25 12:23:35.851265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.861 [2024-11-25 12:23:35.851365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:34.861 [2024-11-25 12:23:35.851389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.861 [2024-11-25 12:23:35.851469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:34.861 [2024-11-25 12:23:35.854231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.861 [2024-11-25 12:23:35.854355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.861 [2024-11-25 12:23:35.854437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:25:34.861 [2024-11-25 12:23:35.854466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.861 [2024-11-25 12:23:35.854781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.861 [2024-11-25 12:23:35.854874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.861 [2024-11-25 12:23:35.854964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:25:34.861 [2024-11-25 12:23:35.855030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.861 [2024-11-25 12:23:35.859526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.861 [2024-11-25 12:23:35.859621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.861 [2024-11-25 12:23:35.859683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.459 ms 00:25:34.861 [2024-11-25 12:23:35.859787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.861 [2024-11-25 12:23:35.867503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:34.861 [2024-11-25 12:23:35.867608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.861 [2024-11-25 12:23:35.867656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.667 ms 00:25:34.861 [2024-11-25 12:23:35.867677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.861 [2024-11-25 12:23:35.891259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.861 [2024-11-25 12:23:35.891376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:34.861 [2024-11-25 12:23:35.891430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.512 ms 00:25:34.861 [2024-11-25 12:23:35.891453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.861 [2024-11-25 12:23:35.905019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.861 [2024-11-25 12:23:35.905125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:34.861 [2024-11-25 12:23:35.905175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.505 ms 00:25:34.861 [2024-11-25 12:23:35.905197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.862 [2024-11-25 12:23:35.906517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.862 [2024-11-25 12:23:35.906608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:34.862 [2024-11-25 12:23:35.906655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:25:34.862 [2024-11-25 12:23:35.906722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.862 [2024-11-25 12:23:35.929349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.862 [2024-11-25 12:23:35.929451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:34.862 [2024-11-25 12:23:35.929594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.596 ms 00:25:34.862 [2024-11-25 12:23:35.929618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.121 [2024-11-25 12:23:35.951921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.121 [2024-11-25 12:23:35.952048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:35.121 [2024-11-25 12:23:35.952152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.243 ms 00:25:35.121 [2024-11-25 12:23:35.952206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.121 [2024-11-25 12:23:35.974570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.121 [2024-11-25 12:23:35.974684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:35.121 [2024-11-25 12:23:35.974698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.329 ms 00:25:35.121 [2024-11-25 12:23:35.974705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.121 [2024-11-25 12:23:35.997104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.121 [2024-11-25 12:23:35.997218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:35.121 [2024-11-25 12:23:35.997274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.359 ms 00:25:35.121 [2024-11-25 12:23:35.997359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.121 [2024-11-25 
12:23:35.997392] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:35.121 [2024-11-25 12:23:35.997439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:35.121 [2024-11-25 12:23:35.997562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:35.121 [2024-11-25 12:23:35.997595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.997623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.997683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.997733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.997761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.997834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.997865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.997893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.997921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 
12:23:35.998503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:35.121 [2024-11-25 12:23:35.998868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:25:35.122 [2024-11-25 12:23:35.998905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.998996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:35.122 [2024-11-25 12:23:35.999301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:35.122 [2024-11-25 12:23:35.999313] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aed19bdd-3515-46f8-96d0-b2af87e28583 00:25:35.122 [2024-11-25 12:23:35.999320] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:35.122 [2024-11-25 12:23:35.999327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:35.122 [2024-11-25 12:23:35.999335] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:35.122 [2024-11-25 12:23:35.999342] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:35.122 [2024-11-25 12:23:35.999349] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:35.122 [2024-11-25 12:23:35.999357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:35.122 [2024-11-25 12:23:35.999370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:35.122 [2024-11-25 12:23:35.999376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:35.122 [2024-11-25 12:23:35.999382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:35.122 [2024-11-25 12:23:35.999389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.122 [2024-11-25 12:23:35.999397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:35.122 [2024-11-25 12:23:35.999406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:25:35.122 [2024-11-25 12:23:35.999413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.122 [2024-11-25 12:23:36.011809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.122 [2024-11-25 12:23:36.011840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:35.122 [2024-11-25 12:23:36.011850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.375 ms 00:25:35.122 [2024-11-25 12:23:36.011858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.122 [2024-11-25 12:23:36.012225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.122 [2024-11-25 12:23:36.012239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:35.122 [2024-11-25 12:23:36.012251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:25:35.122 [2024-11-25 12:23:36.012258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.122 [2024-11-25 12:23:36.044669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.122 [2024-11-25 12:23:36.044794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.122 [2024-11-25 12:23:36.044808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.122 [2024-11-25 12:23:36.044816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.122 [2024-11-25 12:23:36.044871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.122 [2024-11-25 12:23:36.044879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.122 [2024-11-25 12:23:36.044891] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.122 [2024-11-25 12:23:36.044898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.122 [2024-11-25 12:23:36.044971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.122 [2024-11-25 12:23:36.044981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.122 [2024-11-25 12:23:36.044989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.122 [2024-11-25 12:23:36.044996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.122 [2024-11-25 12:23:36.045011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.122 [2024-11-25 12:23:36.045019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.122 [2024-11-25 12:23:36.045026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.122 [2024-11-25 12:23:36.045036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.122 [2024-11-25 12:23:36.121302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.123 [2024-11-25 12:23:36.121348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.123 [2024-11-25 12:23:36.121361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.123 [2024-11-25 12:23:36.121370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.123 [2024-11-25 12:23:36.183901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.123 [2024-11-25 12:23:36.184095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.123 [2024-11-25 12:23:36.184112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.123 [2024-11-25 12:23:36.184125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.123 [2024-11-25 12:23:36.184197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.123 [2024-11-25 12:23:36.184207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.123 [2024-11-25 12:23:36.184215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.123 [2024-11-25 12:23:36.184222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.123 [2024-11-25 12:23:36.184253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.123 [2024-11-25 12:23:36.184260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.123 [2024-11-25 12:23:36.184268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.123 [2024-11-25 12:23:36.184275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.123 [2024-11-25 12:23:36.184363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.123 [2024-11-25 12:23:36.184373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.123 [2024-11-25 12:23:36.184380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.123 [2024-11-25 12:23:36.184387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.123 [2024-11-25 12:23:36.184414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.123 [2024-11-25 12:23:36.184422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:25:35.123 [2024-11-25 12:23:36.184430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.123 [2024-11-25 12:23:36.184437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.123 [2024-11-25 12:23:36.184472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.123 [2024-11-25 12:23:36.184481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.123 [2024-11-25 12:23:36.184488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.123 [2024-11-25 12:23:36.184495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.123 [2024-11-25 12:23:36.184532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.123 [2024-11-25 12:23:36.184542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.123 [2024-11-25 12:23:36.184549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.123 [2024-11-25 12:23:36.184557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.123 [2024-11-25 12:23:36.184662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.604 ms, result 0 00:25:36.055 00:25:36.055 00:25:36.055 12:23:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:37.956 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:37.956 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:37.956 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:37.956 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:37.956 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:38.215 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:38.215 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:38.215 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:38.215 Process with pid 78586 is not found 00:25:38.215 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78586 00:25:38.215 12:23:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78586 ']' 00:25:38.215 12:23:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 78586 00:25:38.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78586) - No such process 00:25:38.215 12:23:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 78586 is not found' 00:25:38.215 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:38.472 Remove shared memory files 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:38.472 ************************************ 00:25:38.472 END TEST ftl_dirty_shutdown 00:25:38.472 ************************************ 00:25:38.472 00:25:38.472 real 2m20.435s 00:25:38.472 user 2m39.163s 00:25:38.472 sys 0m23.809s 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.472 12:23:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:38.472 12:23:39 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:38.472 12:23:39 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:38.472 12:23:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.472 12:23:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:38.731 ************************************ 00:25:38.731 START TEST ftl_upgrade_shutdown 00:25:38.731 ************************************ 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:38.731 * Looking for test storage... 00:25:38.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:38.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.731 --rc genhtml_branch_coverage=1 00:25:38.731 --rc genhtml_function_coverage=1 00:25:38.731 --rc genhtml_legend=1 00:25:38.731 --rc geninfo_all_blocks=1 00:25:38.731 --rc geninfo_unexecuted_blocks=1 00:25:38.731 00:25:38.731 ' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:38.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.731 --rc genhtml_branch_coverage=1 00:25:38.731 --rc genhtml_function_coverage=1 00:25:38.731 --rc genhtml_legend=1 00:25:38.731 --rc geninfo_all_blocks=1 00:25:38.731 --rc geninfo_unexecuted_blocks=1 00:25:38.731 00:25:38.731 ' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:38.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.731 --rc genhtml_branch_coverage=1 00:25:38.731 --rc genhtml_function_coverage=1 00:25:38.731 --rc genhtml_legend=1 00:25:38.731 --rc geninfo_all_blocks=1 00:25:38.731 --rc geninfo_unexecuted_blocks=1 00:25:38.731 00:25:38.731 ' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:38.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.731 --rc genhtml_branch_coverage=1 00:25:38.731 --rc genhtml_function_coverage=1 00:25:38.731 --rc genhtml_legend=1 00:25:38.731 --rc geninfo_all_blocks=1 00:25:38.731 --rc geninfo_unexecuted_blocks=1 00:25:38.731 00:25:38.731 ' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:38.731 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:38.732 12:23:39 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80156 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80156 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80156 ']' 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.732 12:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:38.990 [2024-11-25 12:23:39.822704] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:25:38.990 [2024-11-25 12:23:39.822916] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80156 ] 00:25:38.990 [2024-11-25 12:23:39.983763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.249 [2024-11-25 12:23:40.077373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:39.816 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:40.075 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:40.075 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:40.075 12:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:40.075 12:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:25:40.075 12:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:40.075 12:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:40.075 12:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:25:40.075 12:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:40.075 12:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:40.075 { 00:25:40.075 "name": "basen1", 00:25:40.075 "aliases": [ 00:25:40.075 "d14a2147-c744-4ac5-96d1-626b28d37f13" 00:25:40.075 ], 00:25:40.075 "product_name": "NVMe disk", 00:25:40.075 "block_size": 4096, 00:25:40.075 "num_blocks": 1310720, 00:25:40.075 "uuid": "d14a2147-c744-4ac5-96d1-626b28d37f13", 00:25:40.075 "numa_id": -1, 00:25:40.075 "assigned_rate_limits": { 00:25:40.075 "rw_ios_per_sec": 0, 00:25:40.075 "rw_mbytes_per_sec": 0, 00:25:40.075 "r_mbytes_per_sec": 0, 00:25:40.075 "w_mbytes_per_sec": 0 00:25:40.075 }, 00:25:40.075 "claimed": true, 00:25:40.075 "claim_type": "read_many_write_one", 00:25:40.075 "zoned": false, 00:25:40.075 "supported_io_types": { 00:25:40.075 "read": true, 00:25:40.075 "write": true, 00:25:40.075 "unmap": true, 00:25:40.075 "flush": true, 00:25:40.075 "reset": true, 00:25:40.075 "nvme_admin": true, 00:25:40.075 "nvme_io": true, 00:25:40.075 "nvme_io_md": false, 00:25:40.075 "write_zeroes": true, 00:25:40.075 "zcopy": false, 00:25:40.075 "get_zone_info": false, 00:25:40.075 "zone_management": false, 00:25:40.075 "zone_append": false, 00:25:40.075 "compare": true, 00:25:40.075 "compare_and_write": false, 00:25:40.075 "abort": true, 00:25:40.075 "seek_hole": false, 00:25:40.075 "seek_data": false, 00:25:40.075 "copy": true, 00:25:40.075 "nvme_iov_md": false 00:25:40.075 }, 00:25:40.075 "driver_specific": { 00:25:40.075 "nvme": [ 00:25:40.075 { 00:25:40.075 "pci_address": "0000:00:11.0", 00:25:40.075 "trid": { 00:25:40.075 "trtype": "PCIe", 00:25:40.075 "traddr": "0000:00:11.0" 00:25:40.075 }, 00:25:40.075 "ctrlr_data": { 00:25:40.075 "cntlid": 0, 00:25:40.075 "vendor_id": "0x1b36", 00:25:40.075 "model_number": "QEMU NVMe Ctrl", 00:25:40.075 "serial_number": "12341", 00:25:40.075 "firmware_revision": "8.0.0", 00:25:40.075 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:40.075 "oacs": { 00:25:40.075 "security": 0, 00:25:40.075 "format": 1, 00:25:40.075 "firmware": 0, 00:25:40.075 "ns_manage": 1 00:25:40.075 }, 00:25:40.075 "multi_ctrlr": false, 00:25:40.075 "ana_reporting": false 00:25:40.075 }, 00:25:40.075 "vs": { 00:25:40.075 "nvme_version": "1.4" 00:25:40.075 }, 00:25:40.076 "ns_data": { 00:25:40.076 "id": 1, 00:25:40.076 "can_share": false 00:25:40.076 } 00:25:40.076 } 00:25:40.076 ], 00:25:40.076 "mp_policy": "active_passive" 00:25:40.076 } 00:25:40.076 } 00:25:40.076 ]' 00:25:40.076 12:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:40.334 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:40.593 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=26a1c1e9-18d7-4e13-8ad2-7f32c7af9c68 00:25:40.593 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:40.593 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 26a1c1e9-18d7-4e13-8ad2-7f32c7af9c68 00:25:40.593 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:40.851 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=80944a95-3a60-4975-8ee7-fe87d3af0ee3 00:25:40.851 12:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 80944a95-3a60-4975-8ee7-fe87d3af0ee3 00:25:41.108 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=621db7f4-d970-4a96-a802-23db6752b970 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 621db7f4-d970-4a96-a802-23db6752b970 ]] 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 621db7f4-d970-4a96-a802-23db6752b970 5120 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=621db7f4-d970-4a96-a802-23db6752b970 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 621db7f4-d970-4a96-a802-23db6752b970 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=621db7f4-d970-4a96-a802-23db6752b970 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:41.109 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 621db7f4-d970-4a96-a802-23db6752b970 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:41.367 { 00:25:41.367 "name": "621db7f4-d970-4a96-a802-23db6752b970", 00:25:41.367 "aliases": [ 00:25:41.367 "lvs/basen1p0" 00:25:41.367 ], 00:25:41.367 "product_name": "Logical Volume", 00:25:41.367 "block_size": 4096, 00:25:41.367 "num_blocks": 5242880, 00:25:41.367 "uuid": "621db7f4-d970-4a96-a802-23db6752b970", 00:25:41.367 "assigned_rate_limits": { 00:25:41.367 "rw_ios_per_sec": 0, 00:25:41.367 "rw_mbytes_per_sec": 0, 00:25:41.367 "r_mbytes_per_sec": 0, 00:25:41.367 "w_mbytes_per_sec": 0 00:25:41.367 }, 00:25:41.367 "claimed": false, 00:25:41.367 "zoned": false, 00:25:41.367 "supported_io_types": { 00:25:41.367 "read": true, 00:25:41.367 "write": true, 00:25:41.367 "unmap": true, 00:25:41.367 "flush": false, 00:25:41.367 "reset": true, 00:25:41.367 "nvme_admin": false, 00:25:41.367 "nvme_io": false, 00:25:41.367 "nvme_io_md": false, 00:25:41.367 "write_zeroes": 
true, 00:25:41.367 "zcopy": false, 00:25:41.367 "get_zone_info": false, 00:25:41.367 "zone_management": false, 00:25:41.367 "zone_append": false, 00:25:41.367 "compare": false, 00:25:41.367 "compare_and_write": false, 00:25:41.367 "abort": false, 00:25:41.367 "seek_hole": true, 00:25:41.367 "seek_data": true, 00:25:41.367 "copy": false, 00:25:41.367 "nvme_iov_md": false 00:25:41.367 }, 00:25:41.367 "driver_specific": { 00:25:41.367 "lvol": { 00:25:41.367 "lvol_store_uuid": "80944a95-3a60-4975-8ee7-fe87d3af0ee3", 00:25:41.367 "base_bdev": "basen1", 00:25:41.367 "thin_provision": true, 00:25:41.367 "num_allocated_clusters": 0, 00:25:41.367 "snapshot": false, 00:25:41.367 "clone": false, 00:25:41.367 "esnap_clone": false 00:25:41.367 } 00:25:41.367 } 00:25:41.367 } 00:25:41.367 ]' 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:41.367 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:41.625 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:41.625 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:41.625 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:41.884 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:41.884 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:41.884 12:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 621db7f4-d970-4a96-a802-23db6752b970 -c cachen1p0 --l2p_dram_limit 2 00:25:42.161 [2024-11-25 12:23:42.987184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.987378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:42.161 [2024-11-25 12:23:42.987402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:42.161 [2024-11-25 12:23:42.987411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.987472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.987481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:42.161 [2024-11-25 12:23:42.987491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:25:42.161 [2024-11-25 12:23:42.987498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.987528] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:42.161 [2024-11-25 
12:23:42.988246] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:42.161 [2024-11-25 12:23:42.988264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.988272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:42.161 [2024-11-25 12:23:42.988282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:25:42.161 [2024-11-25 12:23:42.988289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.988323] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 72c87e5a-cf6d-47f0-8042-af974f69bd8c 00:25:42.161 [2024-11-25 12:23:42.989389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.989424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:42.161 [2024-11-25 12:23:42.989434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:25:42.161 [2024-11-25 12:23:42.989444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.994380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.994505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:42.161 [2024-11-25 12:23:42.994522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.855 ms 00:25:42.161 [2024-11-25 12:23:42.994532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.994569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.994579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:42.161 [2024-11-25 12:23:42.994587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:42.161 [2024-11-25 12:23:42.994598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.994634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.994645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:42.161 [2024-11-25 12:23:42.994653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:42.161 [2024-11-25 12:23:42.994666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.994686] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:42.161 [2024-11-25 12:23:42.998226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.998254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:42.161 [2024-11-25 12:23:42.998266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.543 ms 00:25:42.161 [2024-11-25 12:23:42.998274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.998299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.998307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:42.161 [2024-11-25 12:23:42.998316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:42.161 [2024-11-25 12:23:42.998323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.998348] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:42.161 [2024-11-25 12:23:42.998481] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:42.161 [2024-11-25 12:23:42.998495] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:42.161 [2024-11-25 12:23:42.998506] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:42.161 [2024-11-25 12:23:42.998517] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:42.161 [2024-11-25 12:23:42.998526] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:42.161 [2024-11-25 12:23:42.998535] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:42.161 [2024-11-25 12:23:42.998543] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:42.161 [2024-11-25 12:23:42.998553] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:42.161 [2024-11-25 12:23:42.998560] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:42.161 [2024-11-25 12:23:42.998568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.998575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:42.161 [2024-11-25 12:23:42.998584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:25:42.161 [2024-11-25 12:23:42.998591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.998680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.161 [2024-11-25 12:23:42.998689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:42.161 [2024-11-25 12:23:42.998699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:25:42.161 [2024-11-25 12:23:42.998712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.161 [2024-11-25 12:23:42.998822] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:42.161 [2024-11-25 12:23:42.998832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:42.161 [2024-11-25 12:23:42.998842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:42.161 [2024-11-25 12:23:42.998850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.161 [2024-11-25 12:23:42.998859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:42.161 [2024-11-25 12:23:42.998866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:42.161 [2024-11-25 12:23:42.998874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:42.161 [2024-11-25 12:23:42.998881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:42.161 [2024-11-25 12:23:42.998889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:42.161 [2024-11-25 12:23:42.998895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.161 [2024-11-25 12:23:42.998904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:42.162 [2024-11-25 12:23:42.998911] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:42.162 [2024-11-25 12:23:42.998919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.162 [2024-11-25 12:23:42.998925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:42.162 [2024-11-25 12:23:42.998934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:42.162 [2024-11-25 12:23:42.998940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.162 [2024-11-25 12:23:42.998968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:42.162 [2024-11-25 12:23:42.998976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:42.162 [2024-11-25 12:23:42.998985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.162 [2024-11-25 12:23:42.998992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:42.162 [2024-11-25 12:23:42.999003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:42.162 [2024-11-25 12:23:42.999009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:42.162 [2024-11-25 12:23:42.999018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:42.162 [2024-11-25 12:23:42.999025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:42.162 [2024-11-25 12:23:42.999033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:42.162 [2024-11-25 12:23:42.999040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:42.162 [2024-11-25 12:23:42.999048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:42.162 [2024-11-25 12:23:42.999054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:42.162 [2024-11-25 12:23:42.999063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:42.162 [2024-11-25 12:23:42.999070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:42.162 [2024-11-25 12:23:42.999077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:42.162 [2024-11-25 12:23:42.999084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:42.162 [2024-11-25 12:23:42.999093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:42.162 [2024-11-25 12:23:42.999100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.162 [2024-11-25 12:23:42.999108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:42.162 [2024-11-25 12:23:42.999115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:42.162 [2024-11-25 12:23:42.999123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.162 [2024-11-25 12:23:42.999129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:42.162 [2024-11-25 12:23:42.999137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:42.162 [2024-11-25 12:23:42.999144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.162 [2024-11-25 12:23:42.999152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:42.162 [2024-11-25 12:23:42.999158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:42.162 [2024-11-25 12:23:42.999166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.162 [2024-11-25 12:23:42.999172] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:42.162 [2024-11-25 12:23:42.999181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:42.162 [2024-11-25 12:23:42.999188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:42.162 [2024-11-25 12:23:42.999197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:42.162 [2024-11-25 12:23:42.999204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:42.162 [2024-11-25 12:23:42.999215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:42.162 [2024-11-25 12:23:42.999221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:42.162 [2024-11-25 12:23:42.999229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:42.162 [2024-11-25 12:23:42.999236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:42.162 [2024-11-25 12:23:42.999244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:42.162 [2024-11-25 12:23:42.999254] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:42.162 [2024-11-25 12:23:42.999265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:42.162 [2024-11-25 12:23:42.999284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:42.162 [2024-11-25 12:23:42.999306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:42.162 [2024-11-25 12:23:42.999315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:42.162 [2024-11-25 12:23:42.999322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:42.162 [2024-11-25 12:23:42.999331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:42.162 [2024-11-25 12:23:42.999386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:42.162 [2024-11-25 12:23:42.999396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:42.162 [2024-11-25 12:23:42.999413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:42.162 [2024-11-25 12:23:42.999420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:42.162 [2024-11-25 12:23:42.999428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:42.162 [2024-11-25 12:23:42.999436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.162 [2024-11-25 12:23:42.999444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:42.162 [2024-11-25 12:23:42.999452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.683 ms 00:25:42.162 [2024-11-25 12:23:42.999460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.162 [2024-11-25 12:23:42.999497] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
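(Aside: the superblock dump above lists each region as a hex block offset/size, while the layout dump lists MiB; with the 4096-byte block size jq reported earlier, the two agree. A minimal shell sketch of the conversion, assuming a standard awk:)
blocks=$(( 0xe80 ))   # l2p region, type:0x2, blk_sz:0xe80
awk -v b="$blocks" 'BEGIN { printf "%.2f MiB\n", b * 4096 / 1048576 }'   # 14.50 MiB, matching "Region l2p ... blocks: 14.50 MiB"
blocks=$(( 0x480000 ))   # base-device data region, type:0x9
awk -v b="$blocks" 'BEGIN { printf "%.2f MiB\n", b * 4096 / 1048576 }'   # 18432.00 MiB, matching "Region data_btm"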
00:25:42.162 [2024-11-25 12:23:42.999514] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:44.893 [2024-11-25 12:23:45.347780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.347838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:44.893 [2024-11-25 12:23:45.347854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2348.272 ms 00:25:44.893 [2024-11-25 12:23:45.347864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.372906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.373104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:44.893 [2024-11-25 12:23:45.373124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.822 ms 00:25:44.893 [2024-11-25 12:23:45.373134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.373212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.373224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:44.893 [2024-11-25 12:23:45.373232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:44.893 [2024-11-25 12:23:45.373244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.403450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.403487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:44.893 [2024-11-25 12:23:45.403497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.171 ms 00:25:44.893 [2024-11-25 12:23:45.403507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.403535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.403549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:44.893 [2024-11-25 12:23:45.403556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:44.893 [2024-11-25 12:23:45.403565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.403891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.403910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:44.893 [2024-11-25 12:23:45.403918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:25:44.893 [2024-11-25 12:23:45.403928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.403990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.404001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:44.893 [2024-11-25 12:23:45.404011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:44.893 [2024-11-25 12:23:45.404022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.417751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.417785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:44.893 [2024-11-25 12:23:45.417795] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.711 ms 00:25:44.893 [2024-11-25 12:23:45.417804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.429026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:44.893 [2024-11-25 12:23:45.429846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.429875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:44.893 [2024-11-25 12:23:45.429887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.957 ms 00:25:44.893 [2024-11-25 12:23:45.429894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.608116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.608172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:44.893 [2024-11-25 12:23:45.608189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 178.191 ms 00:25:44.893 [2024-11-25 12:23:45.608197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.608275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.608287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:44.893 [2024-11-25 12:23:45.608299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:25:44.893 [2024-11-25 12:23:45.608306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.631160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.631205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:44.893 [2024-11-25 12:23:45.631220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.798 ms 00:25:44.893 [2024-11-25 12:23:45.631229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.653626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.653668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:44.893 [2024-11-25 12:23:45.653681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.358 ms 00:25:44.893 [2024-11-25 12:23:45.653689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.654266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.654282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:44.893 [2024-11-25 12:23:45.654292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:25:44.893 [2024-11-25 12:23:45.654300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.722192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.722247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:44.893 [2024-11-25 12:23:45.722267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 67.842 ms 00:25:44.893 [2024-11-25 12:23:45.722277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.746679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:44.893 [2024-11-25 12:23:45.746743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:44.893 [2024-11-25 12:23:45.746764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.310 ms 00:25:44.893 [2024-11-25 12:23:45.746772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.807768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.807824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:44.893 [2024-11-25 12:23:45.807839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 60.948 ms 00:25:44.893 [2024-11-25 12:23:45.807847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.830953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.830994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:44.893 [2024-11-25 12:23:45.831008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.047 ms 00:25:44.893 [2024-11-25 12:23:45.831016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.893 [2024-11-25 12:23:45.831063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.893 [2024-11-25 12:23:45.831073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:44.893 [2024-11-25 12:23:45.831085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:44.893 [2024-11-25 12:23:45.831092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.894 [2024-11-25 12:23:45.831187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.894 [2024-11-25 12:23:45.831198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:44.894 [2024-11-25 12:23:45.831211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:25:44.894 [2024-11-25 12:23:45.831218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.894 [2024-11-25 12:23:45.832073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2844.483 ms, result 0 00:25:44.894 { 00:25:44.894 "name": "ftl", 00:25:44.894 "uuid": "72c87e5a-cf6d-47f0-8042-af974f69bd8c" 00:25:44.894 } 00:25:44.894 12:23:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:45.151 [2024-11-25 12:23:46.039629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.151 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:45.409 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:45.409 [2024-11-25 12:23:46.439894] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:45.409 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:45.667 [2024-11-25 12:23:46.644320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:45.667 12:23:46 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:45.925 Fill FTL, iteration 1 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80272 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80272 /var/tmp/spdk.tgt.sock 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80272 ']' 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:45.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:45.925 12:23:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.926 12:23:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:46.183 [2024-11-25 12:23:47.065494] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
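(Aside: the parameters set above mean each fill pass writes count blocks of bs bytes, which is exactly the 1 GiB size the script starts from; a one-line check:)
echo $(( 1048576 * 1024 ))   # 1073741824 bytes = 1 GiB per iteration, matching size=1073741824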
00:25:46.183 [2024-11-25 12:23:47.066107] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80272 ] 00:25:46.183 [2024-11-25 12:23:47.224793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.442 [2024-11-25 12:23:47.322298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.007 12:23:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.007 12:23:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:47.007 12:23:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:47.264 ftln1 00:25:47.264 12:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:47.264 12:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80272 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80272 ']' 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80272 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80272 00:25:47.522 killing process with pid 80272 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80272' 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80272 00:25:47.522 12:23:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80272 00:25:48.894 12:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:48.894 12:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:48.894 [2024-11-25 12:23:49.914620] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
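(Aside: the fill reaches the FTL bdev over NVMe-oF rather than directly; the target exposes ftl as nqn.2018-09.io.spdk:cnode0 on 127.0.0.1:4420, a short-lived initiator attaches it as ftln1 and saves that bdev config to ini.json, and spdk_dd then replays the config to stream random data in. A sketch of the standalone equivalent, using only the flags visible in the trace:)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
  --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0   # 1024 x 1 MiB blocks written starting at output block 0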
00:25:48.894 [2024-11-25 12:23:49.914746] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80314 ] 00:25:49.152 [2024-11-25 12:23:50.072226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.152 [2024-11-25 12:23:50.173199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.526  [2024-11-25T12:23:52.609Z] Copying: 223/1024 [MB] (223 MBps) [2024-11-25T12:23:53.569Z] Copying: 452/1024 [MB] (229 MBps) [2024-11-25T12:23:54.945Z] Copying: 654/1024 [MB] (202 MBps) [2024-11-25T12:23:55.510Z] Copying: 843/1024 [MB] (189 MBps) [2024-11-25T12:23:56.445Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:25:55.365 00:25:55.365 Calculate MD5 checksum, iteration 1 00:25:55.365 12:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:55.365 12:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:55.365 12:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:55.365 12:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:55.365 12:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:55.365 12:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:55.365 12:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:55.365 12:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:55.365 [2024-11-25 12:23:56.195035] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
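(Aside: the checksum pass reverses direction: --ib reads from the ftln1 bdev and --of lands the bytes in a plain file that ordinary md5sum can hash. A sketch of the read-back and hash, assuming the same paths as above:)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
  --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '   # this value becomes sums[0] below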
00:25:55.365 [2024-11-25 12:23:56.195158] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80382 ] 00:25:55.365 [2024-11-25 12:23:56.356134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.651 [2024-11-25 12:23:56.456126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.025  [2024-11-25T12:23:58.363Z] Copying: 699/1024 [MB] (699 MBps) [2024-11-25T12:23:58.929Z] Copying: 1024/1024 [MB] (average 699 MBps) 00:25:57.849 00:25:57.849 12:23:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:57.849 12:23:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:00.377 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:00.377 Fill FTL, iteration 2 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f30373ee9cf802757e4b11f0a8ff4937 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:00.378 12:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:00.378 [2024-11-25 12:24:00.947840] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
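(Aside: a compact sketch of the bookkeeping visible in the surrounding trace, reusing the script's own variable names; seek and skip advance by count blocks, i.e. 1024 MiB, per pass, and sums[] keeps one MD5 per 1 GiB slice so the data can be re-verified after the shutdown/upgrade cycle:)
seek=0; skip=0; sums=()
for (( i = 0; i < 2; i++ )); do
  # (the tcp_dd fill of 1024 x 1 MiB at --seek=$seek runs here in the real test)
  seek=$(( seek + 1024 ))
  # (the tcp_dd read-back of the same range at --skip=$skip runs here)
  skip=$(( skip + 1024 ))
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
done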
00:26:00.378 [2024-11-25 12:24:00.947976] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80439 ] 00:26:00.378 [2024-11-25 12:24:01.108780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.378 [2024-11-25 12:24:01.202008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.753  [2024-11-25T12:24:03.798Z] Copying: 210/1024 [MB] (210 MBps) [2024-11-25T12:24:04.732Z] Copying: 431/1024 [MB] (221 MBps) [2024-11-25T12:24:05.666Z] Copying: 676/1024 [MB] (245 MBps) [2024-11-25T12:24:06.233Z] Copying: 900/1024 [MB] (224 MBps) [2024-11-25T12:24:07.168Z] Copying: 1024/1024 [MB] (average 222 MBps) 00:26:06.088 00:26:06.088 Calculate MD5 checksum, iteration 2 00:26:06.088 12:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:06.088 12:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:06.088 12:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:06.088 12:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:06.088 12:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:06.088 12:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:06.088 12:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:06.089 12:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:06.089 [2024-11-25 12:24:06.940712] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:26:06.089 [2024-11-25 12:24:06.940830] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80497 ] 00:26:06.089 [2024-11-25 12:24:07.100240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.347 [2024-11-25 12:24:07.196712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.720  [2024-11-25T12:24:09.364Z] Copying: 680/1024 [MB] (680 MBps) [2024-11-25T12:24:10.298Z] Copying: 1024/1024 [MB] (average 685 MBps) 00:26:09.218 00:26:09.218 12:24:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:09.218 12:24:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:11.776 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:11.776 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7a9d8e37983de03a59feee13d6fcefd5 00:26:11.776 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:11.776 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:11.776 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:11.776 [2024-11-25 12:24:12.597510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.776 [2024-11-25 12:24:12.597560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:11.776 [2024-11-25 12:24:12.597574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:11.776 [2024-11-25 12:24:12.597582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.776 [2024-11-25 12:24:12.597605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.776 [2024-11-25 12:24:12.597614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:11.776 [2024-11-25 12:24:12.597622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:11.776 [2024-11-25 12:24:12.597632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.776 [2024-11-25 12:24:12.597653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.776 [2024-11-25 12:24:12.597661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:11.776 [2024-11-25 12:24:12.597669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:11.776 [2024-11-25 12:24:12.597676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.776 [2024-11-25 12:24:12.597736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.216 ms, result 0 00:26:11.776 true 00:26:11.776 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:11.776 { 00:26:11.776 "name": "ftl", 00:26:11.776 "properties": [ 00:26:11.776 { 00:26:11.776 "name": "superblock_version", 00:26:11.776 "value": 5, 00:26:11.776 "read-only": true 00:26:11.776 }, 00:26:11.776 { 00:26:11.776 "name": "base_device", 00:26:11.776 "bands": [ 00:26:11.776 { 00:26:11.776 "id": 0, 00:26:11.776 "state": "FREE", 00:26:11.776 "validity": 0.0 
00:26:11.776 }, 00:26:11.776 { 00:26:11.776 "id": 1, 00:26:11.776 "state": "FREE", 00:26:11.776 "validity": 0.0 00:26:11.776 }, 00:26:11.776 { 00:26:11.776 "id": 2, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 3, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 4, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 5, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 6, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 7, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 8, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 9, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 10, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 11, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 12, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 13, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 14, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 15, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 16, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 17, 00:26:11.777 "state": "FREE", 00:26:11.777 "validity": 0.0 00:26:11.777 } 00:26:11.777 ], 00:26:11.777 "read-only": true 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "name": "cache_device", 00:26:11.777 "type": "bdev", 00:26:11.777 "chunks": [ 00:26:11.777 { 00:26:11.777 "id": 0, 00:26:11.777 "state": "INACTIVE", 00:26:11.777 "utilization": 0.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 1, 00:26:11.777 "state": "CLOSED", 00:26:11.777 "utilization": 1.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 2, 00:26:11.777 "state": "CLOSED", 00:26:11.777 "utilization": 1.0 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 3, 00:26:11.777 "state": "OPEN", 00:26:11.777 "utilization": 0.001953125 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "id": 4, 00:26:11.777 "state": "OPEN", 00:26:11.777 "utilization": 0.0 00:26:11.777 } 00:26:11.777 ], 00:26:11.777 "read-only": true 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "name": "verbose_mode", 00:26:11.777 "value": true, 00:26:11.777 "unit": "", 00:26:11.777 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:11.777 }, 00:26:11.777 { 00:26:11.777 "name": "prep_upgrade_on_shutdown", 00:26:11.777 "value": false, 00:26:11.777 "unit": "", 00:26:11.777 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:11.777 } 00:26:11.777 ] 00:26:11.777 } 00:26:11.777 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:12.036 [2024-11-25 12:24:12.950505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
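(Aside: the upgrade flag is flipped and re-read through the same RPCs exercised here; the jq filter below is just an illustrative way to pull a single property out of bdev_ftl_get_properties:)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
  | jq '.properties[] | select(.name == "prep_upgrade_on_shutdown") | .value'   # prints: true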
00:26:12.036 [2024-11-25 12:24:12.950706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:12.036 [2024-11-25 12:24:12.950778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:12.036 [2024-11-25 12:24:12.950802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.036 [2024-11-25 12:24:12.950844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.037 [2024-11-25 12:24:12.950866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:12.037 [2024-11-25 12:24:12.950916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:12.037 [2024-11-25 12:24:12.950937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.037 [2024-11-25 12:24:12.950990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.037 [2024-11-25 12:24:12.951014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:12.037 [2024-11-25 12:24:12.951066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:12.037 [2024-11-25 12:24:12.951087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.037 [2024-11-25 12:24:12.951162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.638 ms, result 0 00:26:12.037 true 00:26:12.037 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:12.037 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:12.037 12:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:12.295 12:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:12.295 12:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:12.295 12:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:12.295 [2024-11-25 12:24:13.354973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.295 [2024-11-25 12:24:13.355147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:12.295 [2024-11-25 12:24:13.355205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:12.295 [2024-11-25 12:24:13.355229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.296 [2024-11-25 12:24:13.355269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.296 [2024-11-25 12:24:13.355290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:12.296 [2024-11-25 12:24:13.355309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:12.296 [2024-11-25 12:24:13.355327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.296 [2024-11-25 12:24:13.355358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.296 [2024-11-25 12:24:13.355377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:12.296 [2024-11-25 12:24:13.355424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:12.296 [2024-11-25 12:24:13.355447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:12.296 [2024-11-25 12:24:13.355518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.548 ms, result 0 00:26:12.296 true 00:26:12.556 12:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:12.556 { 00:26:12.556 "name": "ftl", 00:26:12.556 "properties": [ 00:26:12.556 { 00:26:12.556 "name": "superblock_version", 00:26:12.556 "value": 5, 00:26:12.556 "read-only": true 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "name": "base_device", 00:26:12.556 "bands": [ 00:26:12.556 { 00:26:12.556 "id": 0, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 1, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 2, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 3, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 4, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 5, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 6, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 7, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 8, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 9, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 10, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 11, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 12, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 13, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 14, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 15, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 16, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 17, 00:26:12.556 "state": "FREE", 00:26:12.556 "validity": 0.0 00:26:12.556 } 00:26:12.556 ], 00:26:12.556 "read-only": true 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "name": "cache_device", 00:26:12.556 "type": "bdev", 00:26:12.556 "chunks": [ 00:26:12.556 { 00:26:12.556 "id": 0, 00:26:12.556 "state": "INACTIVE", 00:26:12.556 "utilization": 0.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 1, 00:26:12.556 "state": "CLOSED", 00:26:12.556 "utilization": 1.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 2, 00:26:12.556 "state": "CLOSED", 00:26:12.556 "utilization": 1.0 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "id": 3, 00:26:12.557 "state": "OPEN", 00:26:12.557 "utilization": 0.001953125 00:26:12.557 }, 00:26:12.557 { 00:26:12.557 "id": 4, 00:26:12.557 "state": "OPEN", 00:26:12.557 "utilization": 0.0 00:26:12.557 } 00:26:12.557 ], 00:26:12.557 "read-only": true 00:26:12.557 }, 00:26:12.557 { 00:26:12.557 "name": "verbose_mode", 
00:26:12.557 "value": true, 00:26:12.557 "unit": "", 00:26:12.557 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:12.557 }, 00:26:12.557 { 00:26:12.557 "name": "prep_upgrade_on_shutdown", 00:26:12.557 "value": true, 00:26:12.557 "unit": "", 00:26:12.557 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:12.557 } 00:26:12.557 ] 00:26:12.557 } 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80156 ]] 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80156 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80156 ']' 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80156 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80156 00:26:12.557 killing process with pid 80156 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80156' 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80156 00:26:12.557 12:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80156 00:26:13.490 [2024-11-25 12:24:14.268271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:13.490 [2024-11-25 12:24:14.280305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.491 [2024-11-25 12:24:14.280347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:13.491 [2024-11-25 12:24:14.280359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:13.491 [2024-11-25 12:24:14.280367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.491 [2024-11-25 12:24:14.280388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:13.491 [2024-11-25 12:24:14.282993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.491 [2024-11-25 12:24:14.283020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:13.491 [2024-11-25 12:24:14.283031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.592 ms 00:26:13.491 [2024-11-25 12:24:14.283039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.599 [2024-11-25 12:24:21.441717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.599 [2024-11-25 12:24:21.441913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:21.599 [2024-11-25 12:24:21.441933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7158.627 ms 00:26:21.599 [2024-11-25 12:24:21.441961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.599 [2024-11-25 12:24:21.443257] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:21.599 [2024-11-25 12:24:21.443283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:21.599 [2024-11-25 12:24:21.443293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.276 ms 00:26:21.599 [2024-11-25 12:24:21.443301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.599 [2024-11-25 12:24:21.444413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.599 [2024-11-25 12:24:21.444433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:21.599 [2024-11-25 12:24:21.444442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.087 ms 00:26:21.599 [2024-11-25 12:24:21.444449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.599 [2024-11-25 12:24:21.453862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.599 [2024-11-25 12:24:21.453893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:21.599 [2024-11-25 12:24:21.453902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.329 ms 00:26:21.600 [2024-11-25 12:24:21.453910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.459389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.459421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:21.600 [2024-11-25 12:24:21.459432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.449 ms 00:26:21.600 [2024-11-25 12:24:21.459441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.459509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.459520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:21.600 [2024-11-25 12:24:21.459534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:21.600 [2024-11-25 12:24:21.459542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.468446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.468475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:21.600 [2024-11-25 12:24:21.468485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.889 ms 00:26:21.600 [2024-11-25 12:24:21.468492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.477536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.477564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:21.600 [2024-11-25 12:24:21.477573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.014 ms 00:26:21.600 [2024-11-25 12:24:21.477581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.486560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.486588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:21.600 [2024-11-25 12:24:21.486598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.949 ms 00:26:21.600 [2024-11-25 12:24:21.486605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.495278] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.495307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:21.600 [2024-11-25 12:24:21.495317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.617 ms 00:26:21.600 [2024-11-25 12:24:21.495324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.495354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:21.600 [2024-11-25 12:24:21.495367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:21.600 [2024-11-25 12:24:21.495376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:21.600 [2024-11-25 12:24:21.495394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:21.600 [2024-11-25 12:24:21.495402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:21.600 [2024-11-25 12:24:21.495516] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:21.600 [2024-11-25 12:24:21.495523] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 72c87e5a-cf6d-47f0-8042-af974f69bd8c 00:26:21.600 [2024-11-25 12:24:21.495530] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:21.600 [2024-11-25 12:24:21.495537] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:26:21.600 [2024-11-25 12:24:21.495544] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:21.600 [2024-11-25 12:24:21.495552] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:21.600 [2024-11-25 12:24:21.495558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:21.600 [2024-11-25 12:24:21.495569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:21.600 [2024-11-25 12:24:21.495576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:21.600 [2024-11-25 12:24:21.495582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:21.600 [2024-11-25 12:24:21.495588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:21.600 [2024-11-25 12:24:21.495596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.495607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:21.600 [2024-11-25 12:24:21.495614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:26:21.600 [2024-11-25 12:24:21.495621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.508097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.508126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:21.600 [2024-11-25 12:24:21.508136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.461 ms 00:26:21.600 [2024-11-25 12:24:21.508148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.508486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:21.600 [2024-11-25 12:24:21.508494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:21.600 [2024-11-25 12:24:21.508502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:26:21.600 [2024-11-25 12:24:21.508509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.549982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.600 [2024-11-25 12:24:21.550123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:21.600 [2024-11-25 12:24:21.550144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.600 [2024-11-25 12:24:21.550152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.550183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.600 [2024-11-25 12:24:21.550192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:21.600 [2024-11-25 12:24:21.550199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.600 [2024-11-25 12:24:21.550206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.550282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.600 [2024-11-25 12:24:21.550292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:21.600 [2024-11-25 12:24:21.550300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.600 [2024-11-25 12:24:21.550307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.600 [2024-11-25 12:24:21.550327] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.600 [2024-11-25 12:24:21.550335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:21.601 [2024-11-25 12:24:21.550343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.550350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 [2024-11-25 12:24:21.627246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.601 [2024-11-25 12:24:21.627298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:21.601 [2024-11-25 12:24:21.627309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.627321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 [2024-11-25 12:24:21.690229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.601 [2024-11-25 12:24:21.690264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:21.601 [2024-11-25 12:24:21.690275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.690284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 [2024-11-25 12:24:21.690345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.601 [2024-11-25 12:24:21.690354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:21.601 [2024-11-25 12:24:21.690362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.690370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 [2024-11-25 12:24:21.690423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.601 [2024-11-25 12:24:21.690433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:21.601 [2024-11-25 12:24:21.690440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.690448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 [2024-11-25 12:24:21.690532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.601 [2024-11-25 12:24:21.690542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:21.601 [2024-11-25 12:24:21.690550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.690556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 [2024-11-25 12:24:21.690585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.601 [2024-11-25 12:24:21.690596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:21.601 [2024-11-25 12:24:21.690604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.690611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 [2024-11-25 12:24:21.690644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.601 [2024-11-25 12:24:21.690652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:21.601 [2024-11-25 12:24:21.690660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.690667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 
[2024-11-25 12:24:21.690710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:21.601 [2024-11-25 12:24:21.690720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:21.601 [2024-11-25 12:24:21.690727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:21.601 [2024-11-25 12:24:21.690735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:21.601 [2024-11-25 12:24:21.690843] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7410.491 ms, result 0 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80688 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80688 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80688 ']' 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.899 12:24:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:24.899 [2024-11-25 12:24:25.597694] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
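For readers following the trace, a minimal Bash sketch of the stop/start cycle just logged — plain `kill` (SIGTERM) so FTL can run its shutdown chain, then a fresh `spdk_tgt` from the saved config; `tgt_bin`/`tgt_cfg` are illustrative names, the real helpers are `killprocess` in autotest_common.sh and `tcp_target_setup` in ftl/common.sh:

```bash
# Minimal sketch of the graceful stop/start cycle traced above
# (tgt_bin and tgt_cfg are illustrative names, not suite variables).
tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
tgt_cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

# Plain SIGTERM lets FTL run its full 'FTL shutdown' chain first:
# persist L2P, NV cache / band / trim metadata, then set the clean state.
kill "$spdk_tgt_pid"
wait "$spdk_tgt_pid"

# Relaunch from the saved JSON config; the new process reloads the
# superblock and runs the 'FTL startup' chain seen next in the log.
"$tgt_bin" --cpumask='[0]' --config="$tgt_cfg" &
spdk_tgt_pid=$!
```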
00:26:24.899 [2024-11-25 12:24:25.598114] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80688 ] 00:26:24.899 [2024-11-25 12:24:25.759584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.899 [2024-11-25 12:24:25.854517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.466 [2024-11-25 12:24:26.537511] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:25.466 [2024-11-25 12:24:26.537733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:25.726 [2024-11-25 12:24:26.681807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.726 [2024-11-25 12:24:26.681978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:25.726 [2024-11-25 12:24:26.682047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:25.726 [2024-11-25 12:24:26.682073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.726 [2024-11-25 12:24:26.682141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.726 [2024-11-25 12:24:26.682166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:25.726 [2024-11-25 12:24:26.682186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:25.726 [2024-11-25 12:24:26.682205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.726 [2024-11-25 12:24:26.682243] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:25.726 [2024-11-25 12:24:26.683039] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:25.726 [2024-11-25 12:24:26.683144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.726 [2024-11-25 12:24:26.683195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:25.726 [2024-11-25 12:24:26.683218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.910 ms 00:26:25.726 [2024-11-25 12:24:26.683236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.726 [2024-11-25 12:24:26.684589] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:25.726 [2024-11-25 12:24:26.697117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.726 [2024-11-25 12:24:26.697245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:25.726 [2024-11-25 12:24:26.697312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.529 ms 00:26:25.726 [2024-11-25 12:24:26.697336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.726 [2024-11-25 12:24:26.697414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.726 [2024-11-25 12:24:26.697454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:25.726 [2024-11-25 12:24:26.697479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:25.726 [2024-11-25 12:24:26.697498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.726 [2024-11-25 12:24:26.702237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.726 [2024-11-25 
12:24:26.702354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:25.727 [2024-11-25 12:24:26.702409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.667 ms 00:26:25.727 [2024-11-25 12:24:26.702432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.727 [2024-11-25 12:24:26.702502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.727 [2024-11-25 12:24:26.702808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:25.727 [2024-11-25 12:24:26.702884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:25.727 [2024-11-25 12:24:26.702976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.727 [2024-11-25 12:24:26.703074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.727 [2024-11-25 12:24:26.703189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:25.727 [2024-11-25 12:24:26.703220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:25.727 [2024-11-25 12:24:26.703239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.727 [2024-11-25 12:24:26.703279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:25.727 [2024-11-25 12:24:26.706615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.727 [2024-11-25 12:24:26.706714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:25.727 [2024-11-25 12:24:26.706767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.343 ms 00:26:25.727 [2024-11-25 12:24:26.706794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.727 [2024-11-25 12:24:26.706835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.727 [2024-11-25 12:24:26.706933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:25.727 [2024-11-25 12:24:26.706973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:25.727 [2024-11-25 12:24:26.706992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.727 [2024-11-25 12:24:26.707039] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:25.727 [2024-11-25 12:24:26.707167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:25.727 [2024-11-25 12:24:26.707228] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:25.727 [2024-11-25 12:24:26.707296] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:25.727 [2024-11-25 12:24:26.707424] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:25.727 [2024-11-25 12:24:26.707506] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:25.727 [2024-11-25 12:24:26.707541] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:25.727 [2024-11-25 12:24:26.707571] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:25.727 [2024-11-25 12:24:26.707601] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:25.727 [2024-11-25 12:24:26.707715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:25.727 [2024-11-25 12:24:26.707735] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:25.727 [2024-11-25 12:24:26.707752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:25.727 [2024-11-25 12:24:26.707770] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:25.727 [2024-11-25 12:24:26.707790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.727 [2024-11-25 12:24:26.707809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:25.727 [2024-11-25 12:24:26.707828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.753 ms 00:26:25.727 [2024-11-25 12:24:26.707923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.727 [2024-11-25 12:24:26.708045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.727 [2024-11-25 12:24:26.708145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:25.727 [2024-11-25 12:24:26.708168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:26:25.727 [2024-11-25 12:24:26.708180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.727 [2024-11-25 12:24:26.708283] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:25.727 [2024-11-25 12:24:26.708294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:25.727 [2024-11-25 12:24:26.708302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:25.727 [2024-11-25 12:24:26.708309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:25.727 [2024-11-25 12:24:26.708324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:25.727 [2024-11-25 12:24:26.708338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:25.727 [2024-11-25 12:24:26.708344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:25.727 [2024-11-25 12:24:26.708351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:25.727 [2024-11-25 12:24:26.708370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:25.727 [2024-11-25 12:24:26.708381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:25.727 [2024-11-25 12:24:26.708402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:25.727 [2024-11-25 12:24:26.708408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:25.727 [2024-11-25 12:24:26.708421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:25.727 [2024-11-25 12:24:26.708427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708434] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:25.727 [2024-11-25 12:24:26.708440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:25.727 [2024-11-25 12:24:26.708446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.727 [2024-11-25 12:24:26.708452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:25.727 [2024-11-25 12:24:26.708458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:25.727 [2024-11-25 12:24:26.708465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.727 [2024-11-25 12:24:26.708478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:25.727 [2024-11-25 12:24:26.708484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:25.727 [2024-11-25 12:24:26.708490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.727 [2024-11-25 12:24:26.708496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:25.727 [2024-11-25 12:24:26.708502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:25.727 [2024-11-25 12:24:26.708509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.727 [2024-11-25 12:24:26.708515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:25.727 [2024-11-25 12:24:26.708521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:25.727 [2024-11-25 12:24:26.708527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:25.727 [2024-11-25 12:24:26.708541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:25.727 [2024-11-25 12:24:26.708547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:25.727 [2024-11-25 12:24:26.708561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:25.727 [2024-11-25 12:24:26.708581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:25.727 [2024-11-25 12:24:26.708588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708594] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:25.727 [2024-11-25 12:24:26.708601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:25.727 [2024-11-25 12:24:26.708608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:25.727 [2024-11-25 12:24:26.708615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.727 [2024-11-25 12:24:26.708625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:25.727 [2024-11-25 12:24:26.708631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:25.727 [2024-11-25 12:24:26.708637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:25.727 [2024-11-25 12:24:26.708644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:25.727 [2024-11-25 12:24:26.708650] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:25.727 [2024-11-25 12:24:26.708656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:25.727 [2024-11-25 12:24:26.708665] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:25.727 [2024-11-25 12:24:26.708674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.727 [2024-11-25 12:24:26.708682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:25.727 [2024-11-25 12:24:26.708689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:25.727 [2024-11-25 12:24:26.708696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:25.727 [2024-11-25 12:24:26.708703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:25.727 [2024-11-25 12:24:26.708710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:25.728 [2024-11-25 12:24:26.708717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:25.728 [2024-11-25 12:24:26.708724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:25.728 [2024-11-25 12:24:26.708731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:25.728 [2024-11-25 12:24:26.708737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:25.728 [2024-11-25 12:24:26.708744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:25.728 [2024-11-25 12:24:26.708751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:25.728 [2024-11-25 12:24:26.708759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:25.728 [2024-11-25 12:24:26.708765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:25.728 [2024-11-25 12:24:26.708772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:25.728 [2024-11-25 12:24:26.708779] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:25.728 [2024-11-25 12:24:26.708790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.728 [2024-11-25 12:24:26.708797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:25.728 [2024-11-25 12:24:26.708804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:25.728 [2024-11-25 12:24:26.708811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:25.728 [2024-11-25 12:24:26.708818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:25.728 [2024-11-25 12:24:26.708826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.728 [2024-11-25 12:24:26.708833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:25.728 [2024-11-25 12:24:26.708840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.611 ms 00:26:25.728 [2024-11-25 12:24:26.708846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.728 [2024-11-25 12:24:26.708887] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:25.728 [2024-11-25 12:24:26.708896] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:27.628 [2024-11-25 12:24:28.661782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.628 [2024-11-25 12:24:28.661840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:27.628 [2024-11-25 12:24:28.661854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1952.887 ms 00:26:27.628 [2024-11-25 12:24:28.661863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.628 [2024-11-25 12:24:28.686489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.628 [2024-11-25 12:24:28.686533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:27.628 [2024-11-25 12:24:28.686546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.412 ms 00:26:27.628 [2024-11-25 12:24:28.686555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.628 [2024-11-25 12:24:28.686635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.628 [2024-11-25 12:24:28.686650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:27.628 [2024-11-25 12:24:28.686658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:27.628 [2024-11-25 12:24:28.686666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.886 [2024-11-25 12:24:28.716657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.886 [2024-11-25 12:24:28.716788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:27.886 [2024-11-25 12:24:28.716805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.942 ms 00:26:27.886 [2024-11-25 12:24:28.716816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.886 [2024-11-25 12:24:28.716845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.886 [2024-11-25 12:24:28.716853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:27.886 [2024-11-25 12:24:28.716861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:27.886 [2024-11-25 12:24:28.716868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.886 [2024-11-25 12:24:28.717196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.886 [2024-11-25 12:24:28.717212] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:27.886 [2024-11-25 12:24:28.717220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:26:27.886 [2024-11-25 12:24:28.717228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.886 [2024-11-25 12:24:28.717269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.886 [2024-11-25 12:24:28.717278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:27.886 [2024-11-25 12:24:28.717286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:27.886 [2024-11-25 12:24:28.717293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.886 [2024-11-25 12:24:28.731103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.886 [2024-11-25 12:24:28.731212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:27.886 [2024-11-25 12:24:28.731226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.790 ms 00:26:27.886 [2024-11-25 12:24:28.731234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.743382] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:27.887 [2024-11-25 12:24:28.743415] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:27.887 [2024-11-25 12:24:28.743428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.743435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:27.887 [2024-11-25 12:24:28.743443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.100 ms 00:26:27.887 [2024-11-25 12:24:28.743451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.757044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.757077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:27.887 [2024-11-25 12:24:28.757088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.554 ms 00:26:27.887 [2024-11-25 12:24:28.757096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.768260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.768287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:27.887 [2024-11-25 12:24:28.768297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.128 ms 00:26:27.887 [2024-11-25 12:24:28.768304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.779377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.779405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:27.887 [2024-11-25 12:24:28.779415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.041 ms 00:26:27.887 [2024-11-25 12:24:28.779422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.780034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.780053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:27.887 [2024-11-25 
12:24:28.780063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:26:27.887 [2024-11-25 12:24:28.780070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.856695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.856865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:27.887 [2024-11-25 12:24:28.856885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.606 ms 00:26:27.887 [2024-11-25 12:24:28.856893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.867287] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:27.887 [2024-11-25 12:24:28.867999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.868024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:27.887 [2024-11-25 12:24:28.868035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.065 ms 00:26:27.887 [2024-11-25 12:24:28.868042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.868113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.868126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:27.887 [2024-11-25 12:24:28.868135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:27.887 [2024-11-25 12:24:28.868143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.868195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.868205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:27.887 [2024-11-25 12:24:28.868213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:27.887 [2024-11-25 12:24:28.868220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.868240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.868248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:27.887 [2024-11-25 12:24:28.868256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:27.887 [2024-11-25 12:24:28.868266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.868298] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:27.887 [2024-11-25 12:24:28.868307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.868315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:27.887 [2024-11-25 12:24:28.868323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:27.887 [2024-11-25 12:24:28.868331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.891138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.891176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:27.887 [2024-11-25 12:24:28.891188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.789 ms 00:26:27.887 [2024-11-25 12:24:28.891196] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.891265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.887 [2024-11-25 12:24:28.891274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:27.887 [2024-11-25 12:24:28.891282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:27.887 [2024-11-25 12:24:28.891289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.887 [2024-11-25 12:24:28.892529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2210.321 ms, result 0 00:26:27.887 [2024-11-25 12:24:28.907494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.887 [2024-11-25 12:24:28.923462] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:27.887 [2024-11-25 12:24:28.931580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:28.145 12:24:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.145 12:24:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:28.145 12:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:28.145 12:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:28.145 12:24:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:28.145 [2024-11-25 12:24:29.151689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.145 [2024-11-25 12:24:29.151735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:28.145 [2024-11-25 12:24:29.151747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:28.145 [2024-11-25 12:24:29.151758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.145 [2024-11-25 12:24:29.151780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.145 [2024-11-25 12:24:29.151788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:28.145 [2024-11-25 12:24:29.151796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:28.145 [2024-11-25 12:24:29.151804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.145 [2024-11-25 12:24:29.151822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.145 [2024-11-25 12:24:29.151831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:28.145 [2024-11-25 12:24:29.151839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:28.145 [2024-11-25 12:24:29.151846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.145 [2024-11-25 12:24:29.151901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.201 ms, result 0 00:26:28.145 true 00:26:28.145 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:28.403 { 00:26:28.403 "name": "ftl", 00:26:28.403 "properties": [ 00:26:28.403 { 00:26:28.403 "name": "superblock_version", 00:26:28.403 "value": 5, 00:26:28.403 "read-only": true 00:26:28.403 }, 
00:26:28.403 { 00:26:28.403 "name": "base_device", 00:26:28.403 "bands": [ 00:26:28.403 { 00:26:28.403 "id": 0, 00:26:28.403 "state": "CLOSED", 00:26:28.403 "validity": 1.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 1, 00:26:28.403 "state": "CLOSED", 00:26:28.403 "validity": 1.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 2, 00:26:28.403 "state": "CLOSED", 00:26:28.403 "validity": 0.007843137254901933 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 3, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 4, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 5, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 6, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 7, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 8, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 9, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 10, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 11, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 12, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 13, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 14, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 15, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 16, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 17, 00:26:28.403 "state": "FREE", 00:26:28.403 "validity": 0.0 00:26:28.403 } 00:26:28.403 ], 00:26:28.403 "read-only": true 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "name": "cache_device", 00:26:28.403 "type": "bdev", 00:26:28.403 "chunks": [ 00:26:28.403 { 00:26:28.403 "id": 0, 00:26:28.403 "state": "INACTIVE", 00:26:28.403 "utilization": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 1, 00:26:28.403 "state": "OPEN", 00:26:28.403 "utilization": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 2, 00:26:28.403 "state": "OPEN", 00:26:28.403 "utilization": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 3, 00:26:28.403 "state": "FREE", 00:26:28.403 "utilization": 0.0 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "id": 4, 00:26:28.403 "state": "FREE", 00:26:28.403 "utilization": 0.0 00:26:28.403 } 00:26:28.403 ], 00:26:28.403 "read-only": true 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "name": "verbose_mode", 00:26:28.403 "value": true, 00:26:28.403 "unit": "", 00:26:28.403 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:28.403 }, 00:26:28.403 { 00:26:28.403 "name": "prep_upgrade_on_shutdown", 00:26:28.403 "value": false, 00:26:28.403 "unit": "", 00:26:28.403 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:28.403 } 00:26:28.403 ] 00:26:28.403 } 00:26:28.403 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:28.403 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:28.403 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:28.662 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:28.662 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:28.662 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:28.662 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:28.662 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:28.920 Validate MD5 checksum, iteration 1 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:28.920 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:28.921 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:28.921 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:28.921 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:28.921 12:24:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:28.921 [2024-11-25 12:24:29.841817] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
00:26:28.921 [2024-11-25 12:24:29.842067] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80749 ] 00:26:29.179 [2024-11-25 12:24:30.003411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.179 [2024-11-25 12:24:30.103263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.601  [2024-11-25T12:24:32.248Z] Copying: 759/1024 [MB] (759 MBps) [2024-11-25T12:24:33.180Z] Copying: 1024/1024 [MB] (average 745 MBps) 00:26:32.100 00:26:32.100 12:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:32.100 12:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f30373ee9cf802757e4b11f0a8ff4937 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f30373ee9cf802757e4b11f0a8ff4937 != \f\3\0\3\7\3\e\e\9\c\f\8\0\2\7\5\7\e\4\b\1\1\f\0\a\8\f\f\4\9\3\7 ]] 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:34.000 Validate MD5 checksum, iteration 2 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:34.000 12:24:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:34.000 [2024-11-25 12:24:34.784313] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 
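A compact sketch of the validation loop in flight here (iteration 1 just passed with `f30373ee...`, iteration 2 is starting), built from the `spdk_dd`/`md5sum` commands visible in the trace; the `md5` array merely stands in for wherever the suite records the reference sums when the data is first written:

```bash
# Sketch of test_validate_checksum: two 1 GiB passes in this run, each
# reading the next 1024 MiB window back from the FTL bdev.
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
iterations=2
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # Read 1024 x 1 MiB blocks from the ftln1 bdev over NVMe/TCP,
    # starting at the current block offset.
    "$dd_bin" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$file" | cut -f1 -d' ')
    # Any mismatch against the recorded checksum fails the test.
    [[ $sum == "${md5[i]}" ]] || exit 1
done
```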
00:26:34.000 [2024-11-25 12:24:34.784403] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80806 ] 00:26:34.000 [2024-11-25 12:24:34.937127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.000 [2024-11-25 12:24:35.036552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.901  [2024-11-25T12:24:36.981Z] Copying: 753/1024 [MB] (753 MBps) [2024-11-25T12:24:45.091Z] Copying: 1024/1024 [MB] (average 724 MBps) 00:26:44.011 00:26:44.269 12:24:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:44.269 12:24:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7a9d8e37983de03a59feee13d6fcefd5 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7a9d8e37983de03a59feee13d6fcefd5 != \7\a\9\d\8\e\3\7\9\8\3\d\e\0\3\a\5\9\f\e\e\e\1\3\d\6\f\c\e\f\d\5 ]] 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80688 ]] 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80688 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80946 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80946 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80946 ']' 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
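The dirty-shutdown step just performed reduces to the sketch below, assuming the `tcp_target_shutdown_dirty`/`tcp_target_setup` flow shown in the trace; SIGKILL cannot be caught, so none of the 'FTL shutdown' persistence steps run before the process dies:

```bash
# Sketch of tcp_target_shutdown_dirty as traced above: SIGKILL bypasses the
# whole 'FTL shutdown' management chain, so no clean state, L2P, or metadata
# gets persisted.
[[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
unset spdk_tgt_pid

# The relaunched target (pid 80946 above) must recover from on-disk state -
# hence "SHM: clean 0, shm_clean 0" when it loads the super block below.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
```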
00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.190 12:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:46.448 [2024-11-25 12:24:47.309626] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:26:46.448 [2024-11-25 12:24:47.309742] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80946 ] 00:26:46.448 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80688 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:46.448 [2024-11-25 12:24:47.459183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.707 [2024-11-25 12:24:47.535681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.275 [2024-11-25 12:24:48.107945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:47.275 [2024-11-25 12:24:48.108010] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:47.275 [2024-11-25 12:24:48.251248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.251290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:47.275 [2024-11-25 12:24:48.251301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:47.275 [2024-11-25 12:24:48.251308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.251353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.251361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:47.275 [2024-11-25 12:24:48.251368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:47.275 [2024-11-25 12:24:48.251374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.251393] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:47.275 [2024-11-25 12:24:48.252014] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:47.275 [2024-11-25 12:24:48.252027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.252033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:47.275 [2024-11-25 12:24:48.252040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.641 ms 00:26:47.275 [2024-11-25 12:24:48.252047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.252350] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:47.275 [2024-11-25 12:24:48.264984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.265124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:47.275 [2024-11-25 12:24:48.265141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.634 ms 00:26:47.275 [2024-11-25 12:24:48.265147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.272232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:47.275 [2024-11-25 12:24:48.272335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:47.275 [2024-11-25 12:24:48.272351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:47.275 [2024-11-25 12:24:48.272357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.272630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.272640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:47.275 [2024-11-25 12:24:48.272647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:26:47.275 [2024-11-25 12:24:48.272653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.272693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.272702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:47.275 [2024-11-25 12:24:48.272709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:47.275 [2024-11-25 12:24:48.272715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.272737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.272744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:47.275 [2024-11-25 12:24:48.272750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:47.275 [2024-11-25 12:24:48.272757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.272775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:47.275 [2024-11-25 12:24:48.275199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.275223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:47.275 [2024-11-25 12:24:48.275230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.429 ms 00:26:47.275 [2024-11-25 12:24:48.275236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.275262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.275 [2024-11-25 12:24:48.275269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:47.275 [2024-11-25 12:24:48.275275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:47.275 [2024-11-25 12:24:48.275281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.275 [2024-11-25 12:24:48.275298] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:47.275 [2024-11-25 12:24:48.275313] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:47.276 [2024-11-25 12:24:48.275341] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:47.276 [2024-11-25 12:24:48.275356] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:47.276 [2024-11-25 12:24:48.275439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:47.276 [2024-11-25 12:24:48.275447] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:47.276 [2024-11-25 12:24:48.275455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:47.276 [2024-11-25 12:24:48.275463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:47.276 [2024-11-25 12:24:48.275470] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:47.276 [2024-11-25 12:24:48.275476] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:47.276 [2024-11-25 12:24:48.275482] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:47.276 [2024-11-25 12:24:48.275488] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:47.276 [2024-11-25 12:24:48.275494] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:47.276 [2024-11-25 12:24:48.275500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.276 [2024-11-25 12:24:48.275508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:47.276 [2024-11-25 12:24:48.275514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:26:47.276 [2024-11-25 12:24:48.275520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.276 [2024-11-25 12:24:48.275589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.276 [2024-11-25 12:24:48.275595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:47.276 [2024-11-25 12:24:48.275601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:26:47.276 [2024-11-25 12:24:48.275608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.276 [2024-11-25 12:24:48.275688] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:47.276 [2024-11-25 12:24:48.275696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:47.276 [2024-11-25 12:24:48.275704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:47.276 [2024-11-25 12:24:48.275710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:47.276 [2024-11-25 12:24:48.275722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:47.276 [2024-11-25 12:24:48.275733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:47.276 [2024-11-25 12:24:48.275739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:47.276 [2024-11-25 12:24:48.275745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:47.276 [2024-11-25 12:24:48.275757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:47.276 [2024-11-25 12:24:48.275763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:47.276 [2024-11-25 12:24:48.275774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:26:47.276 [2024-11-25 12:24:48.275779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:47.276 [2024-11-25 12:24:48.275790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:47.276 [2024-11-25 12:24:48.275795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:47.276 [2024-11-25 12:24:48.275806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:47.276 [2024-11-25 12:24:48.275811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:47.276 [2024-11-25 12:24:48.275817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:47.276 [2024-11-25 12:24:48.275827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:47.276 [2024-11-25 12:24:48.275832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:47.276 [2024-11-25 12:24:48.275838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:47.276 [2024-11-25 12:24:48.275843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:47.276 [2024-11-25 12:24:48.275848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:47.276 [2024-11-25 12:24:48.275853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:47.276 [2024-11-25 12:24:48.275858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:47.276 [2024-11-25 12:24:48.275863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:47.276 [2024-11-25 12:24:48.275868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:47.276 [2024-11-25 12:24:48.275873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:47.276 [2024-11-25 12:24:48.275879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:47.276 [2024-11-25 12:24:48.275889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:47.276 [2024-11-25 12:24:48.275894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:47.276 [2024-11-25 12:24:48.275904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:47.276 [2024-11-25 12:24:48.275919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:47.276 [2024-11-25 12:24:48.275924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.275930] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:47.276 [2024-11-25 12:24:48.275937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:47.276 [2024-11-25 12:24:48.275944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:47.276 [2024-11-25 12:24:48.276151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:26:47.276 [2024-11-25 12:24:48.276169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:47.276 [2024-11-25 12:24:48.276184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:47.276 [2024-11-25 12:24:48.276199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:47.276 [2024-11-25 12:24:48.276243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:47.276 [2024-11-25 12:24:48.276262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:47.276 [2024-11-25 12:24:48.276277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:47.276 [2024-11-25 12:24:48.276293] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:47.276 [2024-11-25 12:24:48.276318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:47.276 [2024-11-25 12:24:48.276419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:47.276 [2024-11-25 12:24:48.276487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:47.276 [2024-11-25 12:24:48.276509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:47.276 [2024-11-25 12:24:48.276565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:47.276 [2024-11-25 12:24:48.276623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:47.276 [2024-11-25 12:24:48.276804] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:26:47.276 [2024-11-25 12:24:48.276847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:47.276 [2024-11-25 12:24:48.276980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:47.276 [2024-11-25 12:24:48.277003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:47.276 [2024-11-25 12:24:48.277010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:47.276 [2024-11-25 12:24:48.277018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.276 [2024-11-25 12:24:48.277028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:47.276 [2024-11-25 12:24:48.277035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.386 ms 00:26:47.276 [2024-11-25 12:24:48.277041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.276 [2024-11-25 12:24:48.296845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.277 [2024-11-25 12:24:48.296879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:47.277 [2024-11-25 12:24:48.296888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.743 ms 00:26:47.277 [2024-11-25 12:24:48.296894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.277 [2024-11-25 12:24:48.296930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.277 [2024-11-25 12:24:48.296938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:47.277 [2024-11-25 12:24:48.296944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:47.277 [2024-11-25 12:24:48.296962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.277 [2024-11-25 12:24:48.321631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.277 [2024-11-25 12:24:48.321667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:47.277 [2024-11-25 12:24:48.321676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.617 ms 00:26:47.277 [2024-11-25 12:24:48.321683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.277 [2024-11-25 12:24:48.321712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.277 [2024-11-25 12:24:48.321719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:47.277 [2024-11-25 12:24:48.321727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:47.277 [2024-11-25 12:24:48.321733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.277 [2024-11-25 12:24:48.321819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.277 [2024-11-25 12:24:48.321828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:47.277 [2024-11-25 12:24:48.321835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:47.277 [2024-11-25 12:24:48.321841] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:47.277 [2024-11-25 12:24:48.321871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.277 [2024-11-25 12:24:48.321879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:47.277 [2024-11-25 12:24:48.321886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:47.277 [2024-11-25 12:24:48.321892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.277 [2024-11-25 12:24:48.333600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.277 [2024-11-25 12:24:48.333631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:47.277 [2024-11-25 12:24:48.333640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.692 ms 00:26:47.277 [2024-11-25 12:24:48.333646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.277 [2024-11-25 12:24:48.333737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.277 [2024-11-25 12:24:48.333746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:47.277 [2024-11-25 12:24:48.333753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:47.277 [2024-11-25 12:24:48.333759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.535 [2024-11-25 12:24:48.361189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.535 [2024-11-25 12:24:48.361241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:47.535 [2024-11-25 12:24:48.361258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.413 ms 00:26:47.535 [2024-11-25 12:24:48.361267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.535 [2024-11-25 12:24:48.370378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.535 [2024-11-25 12:24:48.370419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:47.535 [2024-11-25 12:24:48.370432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:26:47.535 [2024-11-25 12:24:48.370438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.535 [2024-11-25 12:24:48.414651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.535 [2024-11-25 12:24:48.414702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:47.535 [2024-11-25 12:24:48.414717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.162 ms 00:26:47.535 [2024-11-25 12:24:48.414724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.535 [2024-11-25 12:24:48.414840] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:47.535 [2024-11-25 12:24:48.414915] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:47.535 [2024-11-25 12:24:48.415005] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:47.536 [2024-11-25 12:24:48.415078] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:47.536 [2024-11-25 12:24:48.415087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.536 [2024-11-25 12:24:48.415093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:47.536 [2024-11-25 
12:24:48.415100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.319 ms 00:26:47.536 [2024-11-25 12:24:48.415106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.536 [2024-11-25 12:24:48.415159] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:47.536 [2024-11-25 12:24:48.415168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.536 [2024-11-25 12:24:48.415177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:47.536 [2024-11-25 12:24:48.415183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:47.536 [2024-11-25 12:24:48.415189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.536 [2024-11-25 12:24:48.426679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.536 [2024-11-25 12:24:48.426717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:47.536 [2024-11-25 12:24:48.426727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.472 ms 00:26:47.536 [2024-11-25 12:24:48.426734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.536 [2024-11-25 12:24:48.433277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.536 [2024-11-25 12:24:48.433305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:47.536 [2024-11-25 12:24:48.433314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:47.536 [2024-11-25 12:24:48.433320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.536 [2024-11-25 12:24:48.433401] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:47.536 [2024-11-25 12:24:48.433522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.536 [2024-11-25 12:24:48.433535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:47.536 [2024-11-25 12:24:48.433543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.122 ms 00:26:47.536 [2024-11-25 12:24:48.433548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.794 [2024-11-25 12:24:48.859688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.794 [2024-11-25 12:24:48.859898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:47.794 [2024-11-25 12:24:48.859920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 425.469 ms 00:26:47.794 [2024-11-25 12:24:48.859929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.794 [2024-11-25 12:24:48.863592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.794 [2024-11-25 12:24:48.863626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:47.794 [2024-11-25 12:24:48.863637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:26:47.794 [2024-11-25 12:24:48.863644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.794 [2024-11-25 12:24:48.863917] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:47.794 [2024-11-25 12:24:48.863941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.794 [2024-11-25 12:24:48.863961] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:47.794 [2024-11-25 12:24:48.863970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:26:47.794 [2024-11-25 12:24:48.863978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.794 [2024-11-25 12:24:48.864056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.794 [2024-11-25 12:24:48.864066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:47.794 [2024-11-25 12:24:48.864075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:47.794 [2024-11-25 12:24:48.864082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:47.794 [2024-11-25 12:24:48.864120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 430.716 ms, result 0 00:26:47.794 [2024-11-25 12:24:48.864155] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:47.794 [2024-11-25 12:24:48.864237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:47.794 [2024-11-25 12:24:48.864247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:47.794 [2024-11-25 12:24:48.864254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:26:47.794 [2024-11-25 12:24:48.864261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.285784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.285837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:48.360 [2024-11-25 12:24:49.285851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 420.641 ms 00:26:48.360 [2024-11-25 12:24:49.285859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.289484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.289515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:48.360 [2024-11-25 12:24:49.289525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.705 ms 00:26:48.360 [2024-11-25 12:24:49.289532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.289835] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:48.360 [2024-11-25 12:24:49.289859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.289867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:48.360 [2024-11-25 12:24:49.289875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:26:48.360 [2024-11-25 12:24:49.289881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.289992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.290002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:48.360 [2024-11-25 12:24:49.290010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:48.360 [2024-11-25 12:24:49.290017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 
12:24:49.290063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 425.902 ms, result 0 00:26:48.360 [2024-11-25 12:24:49.290102] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:48.360 [2024-11-25 12:24:49.290112] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:48.360 [2024-11-25 12:24:49.290120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.290128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:48.360 [2024-11-25 12:24:49.290136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 856.733 ms 00:26:48.360 [2024-11-25 12:24:49.290143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.290171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.290185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:48.360 [2024-11-25 12:24:49.290195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:48.360 [2024-11-25 12:24:49.290203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.300903] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:48.360 [2024-11-25 12:24:49.301031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.301047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:48.360 [2024-11-25 12:24:49.301056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.812 ms 00:26:48.360 [2024-11-25 12:24:49.301063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.301737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.301857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:48.360 [2024-11-25 12:24:49.301875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.607 ms 00:26:48.360 [2024-11-25 12:24:49.301883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.304116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.304132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:48.360 [2024-11-25 12:24:49.304142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.213 ms 00:26:48.360 [2024-11-25 12:24:49.304150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.304188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.304196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:48.360 [2024-11-25 12:24:49.304204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:48.360 [2024-11-25 12:24:49.304215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.360 [2024-11-25 12:24:49.304314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.360 [2024-11-25 12:24:49.304323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:48.360 
[2024-11-25 12:24:49.304331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:48.361 [2024-11-25 12:24:49.304338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.361 [2024-11-25 12:24:49.304358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.361 [2024-11-25 12:24:49.304365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:48.361 [2024-11-25 12:24:49.304372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:48.361 [2024-11-25 12:24:49.304379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.361 [2024-11-25 12:24:49.304405] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:48.361 [2024-11-25 12:24:49.304416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.361 [2024-11-25 12:24:49.304423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:48.361 [2024-11-25 12:24:49.304430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:48.361 [2024-11-25 12:24:49.304438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.361 [2024-11-25 12:24:49.304486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.361 [2024-11-25 12:24:49.304495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:48.361 [2024-11-25 12:24:49.304503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:48.361 [2024-11-25 12:24:49.304509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.361 [2024-11-25 12:24:49.305473] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1053.696 ms, result 0 00:26:48.361 [2024-11-25 12:24:49.317927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.361 [2024-11-25 12:24:49.333916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:48.361 [2024-11-25 12:24:49.342050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:48.927 Validate MD5 checksum, iteration 1 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:48.927 12:24:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:48.927 12:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:48.927 [2024-11-25 12:24:49.864579] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization... 00:26:48.927 [2024-11-25 12:24:49.864848] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80976 ] 00:26:49.186 [2024-11-25 12:24:50.021133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.186 [2024-11-25 12:24:50.119196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.560  [2024-11-25T12:24:52.206Z] Copying: 701/1024 [MB] (701 MBps) [2024-11-25T12:24:53.601Z] Copying: 1024/1024 [MB] (average 691 MBps) 00:26:52.521 00:26:52.521 12:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:52.521 12:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:54.418 Validate MD5 checksum, iteration 2 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f30373ee9cf802757e4b11f0a8ff4937 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f30373ee9cf802757e4b11f0a8ff4937 != \f\3\0\3\7\3\e\e\9\c\f\8\0\2\7\5\7\e\4\b\1\1\f\0\a\8\f\f\4\9\3\7 ]] 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:54.418 12:24:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:54.418 [2024-11-25 12:24:55.426731] Starting SPDK v25.01-pre git sha1 
393e80fcd / DPDK 24.03.0 initialization... 00:26:54.418 [2024-11-25 12:24:55.426842] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81043 ] 00:26:54.676 [2024-11-25 12:24:55.585114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.676 [2024-11-25 12:24:55.680200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.575  [2024-11-25T12:24:57.914Z] Copying: 706/1024 [MB] (706 MBps) [2024-11-25T12:24:58.480Z] Copying: 1024/1024 [MB] (average 683 MBps) 00:26:57.400 00:26:57.400 12:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:57.400 12:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7a9d8e37983de03a59feee13d6fcefd5 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7a9d8e37983de03a59feee13d6fcefd5 != \7\a\9\d\8\e\3\7\9\8\3\d\e\0\3\a\5\9\f\e\e\e\1\3\d\6\f\c\e\f\d\5 ]] 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80946 ]] 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80946 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80946 ']' 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80946 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80946 00:26:59.953 killing process with pid 80946 00:26:59.953 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:59.954 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:59.954 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80946' 00:26:59.954 12:25:00 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 80946 00:26:59.954 12:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80946 00:27:00.211 [2024-11-25 12:25:01.280011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:00.471 [2024-11-25 12:25:01.292243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.292277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:00.471 [2024-11-25 12:25:01.292287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:00.471 [2024-11-25 12:25:01.292294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.292311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:00.471 [2024-11-25 12:25:01.294416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.294441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:00.471 [2024-11-25 12:25:01.294449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.094 ms 00:27:00.471 [2024-11-25 12:25:01.294459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.294626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.294634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:00.471 [2024-11-25 12:25:01.294641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.151 ms 00:27:00.471 [2024-11-25 12:25:01.294647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.295689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.295793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:00.471 [2024-11-25 12:25:01.295805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.031 ms 00:27:00.471 [2024-11-25 12:25:01.295811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.296720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.296734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:00.471 [2024-11-25 12:25:01.296742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.880 ms 00:27:00.471 [2024-11-25 12:25:01.296749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.304281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.304307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:00.471 [2024-11-25 12:25:01.304315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.495 ms 00:27:00.471 [2024-11-25 12:25:01.304326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.308407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.308431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:00.471 [2024-11-25 12:25:01.308439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.054 ms 00:27:00.471 [2024-11-25 12:25:01.308446] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.308503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.308510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:00.471 [2024-11-25 12:25:01.308517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:00.471 [2024-11-25 12:25:01.308523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.315536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.315560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:00.471 [2024-11-25 12:25:01.315567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.998 ms 00:27:00.471 [2024-11-25 12:25:01.315572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.323066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.323089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:00.471 [2024-11-25 12:25:01.323096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.470 ms 00:27:00.471 [2024-11-25 12:25:01.323101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.329934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.329973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:00.471 [2024-11-25 12:25:01.329980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.809 ms 00:27:00.471 [2024-11-25 12:25:01.329986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.336957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.471 [2024-11-25 12:25:01.336980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:00.471 [2024-11-25 12:25:01.336986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.925 ms 00:27:00.471 [2024-11-25 12:25:01.336991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.471 [2024-11-25 12:25:01.337014] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:00.471 [2024-11-25 12:25:01.337026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:00.471 [2024-11-25 12:25:01.337033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:00.471 [2024-11-25 12:25:01.337039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:00.471 [2024-11-25 12:25:01.337045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 
[2024-11-25 12:25:01.337074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:00.471 [2024-11-25 12:25:01.337102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:00.472 [2024-11-25 12:25:01.337108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:00.472 [2024-11-25 12:25:01.337113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:00.472 [2024-11-25 12:25:01.337119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:00.472 [2024-11-25 12:25:01.337124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:00.472 [2024-11-25 12:25:01.337132] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:00.472 [2024-11-25 12:25:01.337137] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 72c87e5a-cf6d-47f0-8042-af974f69bd8c 00:27:00.472 [2024-11-25 12:25:01.337143] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:00.472 [2024-11-25 12:25:01.337148] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:00.472 [2024-11-25 12:25:01.337153] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:00.472 [2024-11-25 12:25:01.337159] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:00.472 [2024-11-25 12:25:01.337164] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:00.472 [2024-11-25 12:25:01.337170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:00.472 [2024-11-25 12:25:01.337175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:00.472 [2024-11-25 12:25:01.337180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:00.472 [2024-11-25 12:25:01.337185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:00.472 [2024-11-25 12:25:01.337191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.472 [2024-11-25 12:25:01.337200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:00.472 [2024-11-25 12:25:01.337207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:27:00.472 [2024-11-25 12:25:01.337213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.472 [2024-11-25 12:25:01.346854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.472 [2024-11-25 12:25:01.346939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:00.472 [2024-11-25 12:25:01.346997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.628 ms 00:27:00.472 [2024-11-25 12:25:01.347015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
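Two details in the statistics dump above are worth a sanity check. The band ledger accounts exactly for the test data: two full bands plus the partial third give 2 x 261120 + 2048 = 524288 valid blocks, matching the reported 'total valid LBAs: 524288' (2 GiB at 4 KiB blocks, i.e. the two 1024 MiB slices just re-validated). And 'WAF: inf' is the expected degenerate value here: write amplification is total device writes divided by user writes, and this recovered instance performed 320 internal metadata writes against 0 user writes since the dirty restart, so 320/0 prints as infinity.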
00:27:00.472 [2024-11-25 12:25:01.347291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:00.472 [2024-11-25 12:25:01.347350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:27:00.472 [2024-11-25 12:25:01.347384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms
00:27:00.472 [2024-11-25 12:25:01.347401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.380434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.380520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:27:00.472 [2024-11-25 12:25:01.380558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.380575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.380609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.380625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:27:00.472 [2024-11-25 12:25:01.380640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.380654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.380726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.380746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:27:00.472 [2024-11-25 12:25:01.380763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.380854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.380881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.380970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:27:00.472 [2024-11-25 12:25:01.380990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.381023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.440929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.441035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:27:00.472 [2024-11-25 12:25:01.441079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.441096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.492195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.492336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:27:00.472 [2024-11-25 12:25:01.492381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.492401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.492470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.492490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:27:00.472 [2024-11-25 12:25:01.492506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.492521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.492580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.492636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:27:00.472 [2024-11-25 12:25:01.492659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.492679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.492765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.492840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:27:00.472 [2024-11-25 12:25:01.492848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.492855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.492879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.492886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:27:00.472 [2024-11-25 12:25:01.492893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.492901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.492929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.492936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:27:00.472 [2024-11-25 12:25:01.492942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.492969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.493004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:00.472 [2024-11-25 12:25:01.493011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:27:00.472 [2024-11-25 12:25:01.493020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:00.472 [2024-11-25 12:25:01.493026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:00.472 [2024-11-25 12:25:01.493116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 200.850 ms, result 0
00:27:01.409 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:27:01.409 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:01.409 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:27:01.409 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:27:01.409 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:27:01.409 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:27:01.410 Remove shared memory files
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80688
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:27:01.410 ************************************
00:27:01.410 END TEST ftl_upgrade_shutdown
00:27:01.410 ************************************
00:27:01.410
00:27:01.410 real 1m22.589s
00:27:01.410 user 1m54.132s
00:27:01.410 sys 0m18.689s
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:01.410 12:25:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:01.410 12:25:02 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:27:01.410 12:25:02 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:27:01.410 Process with pid 75239 is not found
00:27:01.410 12:25:02 ftl -- ftl/ftl.sh@14 -- # killprocess 75239
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@954 -- # '[' -z 75239 ']'
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@958 -- # kill -0 75239
00:27:01.410 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75239) - No such process
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75239 is not found'
00:27:01.410 12:25:02 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:27:01.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:01.410 12:25:02 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81140
00:27:01.410 12:25:02 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81140
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@835 -- # '[' -z 81140 ']'
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:01.410 12:25:02 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:01.410 12:25:02 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:01.410 [2024-11-25 12:25:02.243483] Starting SPDK v25.01-pre git sha1 393e80fcd / DPDK 24.03.0 initialization...
00:27:01.410 [2024-11-25 12:25:02.243599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81140 ]
00:27:01.410 [2024-11-25 12:25:02.399960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:01.410 [2024-11-25 12:25:02.480646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:01.976 12:25:03 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:01.976 12:25:03 ftl -- common/autotest_common.sh@868 -- # return 0
00:27:01.976 12:25:03 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:27:02.234 nvme0n1
00:27:02.234 12:25:03 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:27:02.234 12:25:03 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:27:02.234 12:25:03 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:27:02.492 12:25:03 ftl -- ftl/common.sh@28 -- # stores=80944a95-3a60-4975-8ee7-fe87d3af0ee3
00:27:02.492 12:25:03 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:27:02.492 12:25:03 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 80944a95-3a60-4975-8ee7-fe87d3af0ee3
00:27:02.750 12:25:03 ftl -- ftl/ftl.sh@23 -- # killprocess 81140
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@954 -- # '[' -z 81140 ']'
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@958 -- # kill -0 81140
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@959 -- # uname
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81140
00:27:02.750 killing process with pid 81140
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81140'
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@973 -- # kill 81140
00:27:02.750 12:25:03 ftl -- common/autotest_common.sh@978 -- # wait 81140
00:27:04.122 12:25:04 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:27:04.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:04.122 Waiting for block devices as requested
00:27:04.122 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:27:04.122 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:27:04.122 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:27:04.380 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:27:09.643 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:27:09.643 Remove shared memory files
00:27:09.643 12:25:10 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:27:09.643 12:25:10 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:09.643 12:25:10 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:27:09.643 12:25:10 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:27:09.643 12:25:10 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:27:09.643 12:25:10 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:09.643 12:25:10 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:27:09.643
00:27:09.643 real 8m34.813s
00:27:09.643 user 10m57.788s
00:27:09.643 sys 1m4.478s
00:27:09.643 12:25:10 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:09.643 12:25:10 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:09.643 ************************************
00:27:09.643 END TEST ftl
00:27:09.643 ************************************
00:27:09.643 12:25:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:27:09.643 12:25:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:27:09.643 12:25:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:27:09.643 12:25:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:27:09.643 12:25:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:27:09.643 12:25:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:27:09.643 12:25:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:27:09.643 12:25:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:27:09.643 12:25:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:27:09.643 12:25:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:27:09.643 12:25:10 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:09.643 12:25:10 -- common/autotest_common.sh@10 -- # set +x
00:27:09.643 12:25:10 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:27:09.643 12:25:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:27:09.643 12:25:10 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:27:09.643 12:25:10 -- common/autotest_common.sh@10 -- # set +x
00:27:10.577 INFO: APP EXITING
00:27:10.577 INFO: killing all VMs
00:27:10.577 INFO: killing vhost app
00:27:10.577 INFO: EXIT DONE
00:27:10.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:11.094 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:27:11.094 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:27:11.094 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:27:11.094 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:27:11.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:11.967 Cleaning
00:27:11.967 Removing: /var/run/dpdk/spdk0/config
00:27:11.967 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:11.967 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:11.967 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:11.967 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:11.967 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:27:11.967 Removing: /var/run/dpdk/spdk0/hugepage_info
00:27:11.967 Removing: /var/run/dpdk/spdk0
00:27:11.967 Removing: /var/run/dpdk/spdk_pid56957
00:27:11.967 Removing: /var/run/dpdk/spdk_pid57170
00:27:11.967 Removing: /var/run/dpdk/spdk_pid57383
00:27:11.967 Removing: /var/run/dpdk/spdk_pid57476
00:27:11.967 Removing: /var/run/dpdk/spdk_pid57515
00:27:11.967 Removing: /var/run/dpdk/spdk_pid57638
00:27:11.967 Removing: /var/run/dpdk/spdk_pid57656
00:27:11.967 Removing: /var/run/dpdk/spdk_pid57844
00:27:11.967 Removing: /var/run/dpdk/spdk_pid57936
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58021
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58132
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58229
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58274
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58305
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58381
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58487
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58923
00:27:11.967 Removing: /var/run/dpdk/spdk_pid58987
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59045
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59061
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59168
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59190
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59297
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59313
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59372
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59390
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59448
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59471
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59650
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59687
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59776
00:27:11.967 Removing: /var/run/dpdk/spdk_pid59959
00:27:11.967 Removing: /var/run/dpdk/spdk_pid60043
00:27:11.967 Removing: /var/run/dpdk/spdk_pid60085
00:27:11.967 Removing: /var/run/dpdk/spdk_pid60551
00:27:11.967 Removing: /var/run/dpdk/spdk_pid60647
00:27:11.967 Removing: /var/run/dpdk/spdk_pid60762
00:27:11.967 Removing: /var/run/dpdk/spdk_pid60815
00:27:11.967 Removing: /var/run/dpdk/spdk_pid60846
00:27:11.967 Removing: /var/run/dpdk/spdk_pid60924
00:27:11.967 Removing: /var/run/dpdk/spdk_pid61557
00:27:11.967 Removing: /var/run/dpdk/spdk_pid61599
00:27:11.967 Removing: /var/run/dpdk/spdk_pid62089
00:27:11.967 Removing: /var/run/dpdk/spdk_pid62188
00:27:11.967 Removing: /var/run/dpdk/spdk_pid62297
00:27:11.967 Removing: /var/run/dpdk/spdk_pid62350
00:27:11.967 Removing: /var/run/dpdk/spdk_pid62370
00:27:11.967 Removing: /var/run/dpdk/spdk_pid62401
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64236
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64368
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64381
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64399
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64439
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64443
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64455
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64501
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64505
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64517
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64562
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64566
00:27:11.967 Removing: /var/run/dpdk/spdk_pid64578
00:27:11.967 Removing: /var/run/dpdk/spdk_pid65961
00:27:11.967 Removing: /var/run/dpdk/spdk_pid66058
00:27:11.967 Removing: /var/run/dpdk/spdk_pid67464
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69226
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69294
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69369
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69480
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69566
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69662
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69736
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69810
00:27:11.967 Removing: /var/run/dpdk/spdk_pid69910
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70007
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70103
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70166
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70248
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70352
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70444
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70545
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70619
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70689
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70799
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70896
00:27:11.967 Removing: /var/run/dpdk/spdk_pid70992
00:27:11.967 Removing: /var/run/dpdk/spdk_pid71066
00:27:11.967 Removing: /var/run/dpdk/spdk_pid71140
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71210
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71290
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71393
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71491
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71586
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71655
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71729
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71802
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71872
00:27:11.968 Removing: /var/run/dpdk/spdk_pid71982
00:27:11.968 Removing: /var/run/dpdk/spdk_pid72068
00:27:11.968 Removing: /var/run/dpdk/spdk_pid72212
00:27:11.968 Removing: /var/run/dpdk/spdk_pid72496
00:27:11.968 Removing: /var/run/dpdk/spdk_pid72527
00:27:11.968 Removing: /var/run/dpdk/spdk_pid72983
00:27:11.968 Removing: /var/run/dpdk/spdk_pid73167
00:27:11.968 Removing: /var/run/dpdk/spdk_pid73266
00:27:11.968 Removing: /var/run/dpdk/spdk_pid73373
00:27:11.968 Removing: /var/run/dpdk/spdk_pid73423
00:27:11.968 Removing: /var/run/dpdk/spdk_pid73450
00:27:11.968 Removing: /var/run/dpdk/spdk_pid73768
00:27:11.968 Removing: /var/run/dpdk/spdk_pid73823
00:27:11.968 Removing: /var/run/dpdk/spdk_pid73898
00:27:11.968 Removing: /var/run/dpdk/spdk_pid74293
00:27:11.968 Removing: /var/run/dpdk/spdk_pid74438
00:27:11.968 Removing: /var/run/dpdk/spdk_pid75239
00:27:11.968 Removing: /var/run/dpdk/spdk_pid75371
00:27:11.968 Removing: /var/run/dpdk/spdk_pid75535
00:27:11.968 Removing: /var/run/dpdk/spdk_pid75630
00:27:11.968 Removing: /var/run/dpdk/spdk_pid75916
00:27:11.968 Removing: /var/run/dpdk/spdk_pid76154
00:27:11.968 Removing: /var/run/dpdk/spdk_pid76481
00:27:11.968 Removing: /var/run/dpdk/spdk_pid76657
00:27:11.968 Removing: /var/run/dpdk/spdk_pid76743
00:27:11.968 Removing: /var/run/dpdk/spdk_pid76802
00:27:11.968 Removing: /var/run/dpdk/spdk_pid76891
00:27:11.968 Removing: /var/run/dpdk/spdk_pid76916
00:27:11.968 Removing: /var/run/dpdk/spdk_pid76963
00:27:11.968 Removing: /var/run/dpdk/spdk_pid77122
00:27:11.968 Removing: /var/run/dpdk/spdk_pid77333
00:27:11.968 Removing: /var/run/dpdk/spdk_pid77599
00:27:11.968 Removing: /var/run/dpdk/spdk_pid77874
00:27:11.968 Removing: /var/run/dpdk/spdk_pid78226
00:27:11.968 Removing: /var/run/dpdk/spdk_pid78586
00:27:11.968 Removing: /var/run/dpdk/spdk_pid78717
00:27:11.968 Removing: /var/run/dpdk/spdk_pid78804
00:27:11.968 Removing: /var/run/dpdk/spdk_pid79193
00:27:11.968 Removing: /var/run/dpdk/spdk_pid79257
00:27:11.968 Removing: /var/run/dpdk/spdk_pid79550
00:27:11.968 Removing: /var/run/dpdk/spdk_pid79816
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80156
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80272
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80314
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80382
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80439
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80497
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80688
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80749
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80806
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80946
00:27:11.968 Removing: /var/run/dpdk/spdk_pid80976
00:27:11.968 Removing: /var/run/dpdk/spdk_pid81043
00:27:12.226 Removing: /var/run/dpdk/spdk_pid81140
00:27:12.226 Clean
00:27:12.226 12:25:13 -- common/autotest_common.sh@1453 -- # return 0
00:27:12.226 12:25:13 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:27:12.226 12:25:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:12.226 12:25:13 -- common/autotest_common.sh@10 -- # set +x
00:27:12.226 12:25:13 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:27:12.226 12:25:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:12.226 12:25:13 -- common/autotest_common.sh@10 -- # set +x
00:27:12.226 12:25:13 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:12.226 12:25:13 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:27:12.226 12:25:13 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:27:12.226 12:25:13 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:27:12.226 12:25:13 -- spdk/autotest.sh@398 -- # hostname
00:27:12.226 12:25:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:27:12.226 geninfo: WARNING: invalid characters removed from testname!
00:27:38.759 12:25:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:38.759 12:25:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:39.693 12:25:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:41.593 12:25:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:42.968 12:25:43 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:44.903 12:25:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:46.803 12:25:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:47.061 12:25:47 -- spdk/autorun.sh@1 -- $ timing_finish
00:27:47.061 12:25:47 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:27:47.061 12:25:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:47.061 12:25:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:27:47.061 12:25:47 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:47.061 + [[ -n 5022 ]]
00:27:47.061 + sudo kill 5022
00:27:47.072 [Pipeline] }
00:27:47.092 [Pipeline] // timeout
00:27:47.098 [Pipeline] }
00:27:47.115 [Pipeline] // stage
00:27:47.122 [Pipeline] }
00:27:47.137 [Pipeline] // catchError
00:27:47.149 [Pipeline] stage
00:27:47.152 [Pipeline] { (Stop VM)
00:27:47.165 [Pipeline] sh
00:27:47.443 + vagrant halt
00:27:49.968 ==> default: Halting domain...
00:27:54.163 [Pipeline] sh
00:27:54.440 + vagrant destroy -f
00:27:56.968 ==> default: Removing domain...
00:27:57.564 [Pipeline] sh
00:27:57.841 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:27:57.850 [Pipeline] }
00:27:57.865 [Pipeline] // stage
00:27:57.871 [Pipeline] }
00:27:57.886 [Pipeline] // dir
00:27:57.892 [Pipeline] }
00:27:57.906 [Pipeline] // wrap
00:27:57.912 [Pipeline] }
00:27:57.925 [Pipeline] // catchError
00:27:57.934 [Pipeline] stage
00:27:57.935 [Pipeline] { (Epilogue)
00:27:57.950 [Pipeline] sh
00:27:58.228 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:03.496 [Pipeline] catchError
00:28:03.498 [Pipeline] {
00:28:03.509 [Pipeline] sh
00:28:03.785 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:03.785 Artifacts sizes are good
00:28:03.794 [Pipeline] }
00:28:03.809 [Pipeline] // catchError
00:28:03.820 [Pipeline] archiveArtifacts
00:28:03.827 Archiving artifacts
00:28:03.922 [Pipeline] cleanWs
00:28:03.932 [WS-CLEANUP] Deleting project workspace...
00:28:03.932 [WS-CLEANUP] Deferred wipeout is used...
00:28:03.937 [WS-CLEANUP] done
00:28:03.938 [Pipeline] }
00:28:03.953 [Pipeline] // stage
00:28:03.959 [Pipeline] }
00:28:03.973 [Pipeline] // node
00:28:03.978 [Pipeline] End of Pipeline
00:28:04.055 Finished: SUCCESS